Sitecore 9: Content Editor Federated Authentication with Gmail

Recently, in one of my Sitecore projects, I got a requirement that content editors should be able to log in using a third-party identity provider such as Google. In previous projects I had used federated authentication several times for website users, but for Sitecore content users it was a bit different. I referred to several articles while implementing this, and this post is essentially a consolidation of those articles, along with some changes related to the user builder and the Google authentication provider. Below are a few references which are worth reading, as they cover the flow in depth.

https://doc.sitecore.net/sitecore_experience_platform/developing/developing_with_sitecore/federated_authentication/using_federated_authentication_with_sitecore

http://blog.baslijten.com/enable-federated-authentication-and-configure-auth0-as-an-identity-provider-in-sitecore-9-0/

https://doc.sitecore.net/sitecore_experience_platform/developing/developing_with_sitecore/federated_authentication/configure_federated_authentication

The code I listed is based on the repository provided by Bas Lijten at

https://github.com/BasLijten/sitecore-federated-authentication/tree/master/BasLijten.FederatedAuthentication

I have provided the ready-to-use code from this article in a GitHub repository at https://github.com/rdhaundiyal/SitecoreFederatedAuthenticationGmail

Though Sitecore 9 provides an out-of-the-box feature for OWIN authentication, there are a few places where you might end up writing some custom code. The article below shows how you can authenticate content editors through Google.

Before starting the Sitecore part, make sure you have created a Google application and have the corresponding client ID and secret, which will be used for Google authentication.

To create a Google application for your integration, please refer to:

https://doc.sitecore.net/social_connected/setting_up_social_connected/configuring/walkthrough_configuring_social_connector_to_work_with_a_social_network

Steps:

  1. Start by creating a new project in Visual Studio 2017 targeting .NET Framework 4.6.2 and select the class library option.
  2. Add the NuGet packages Microsoft.Owin.Security.Google and Microsoft.AspNet.Identity.
  3. Add references to Sitecore.Kernel, Sitecore.Owin, Sitecore.Owin.Authentication, System.Web and Microsoft.Owin.Security.Google.
  4. Create a class GmailIdentityProcessor inheriting from IdentityProvidersProcessor.
  5. Override the ProcessCore method, where you set the provider to GoogleOAuth2AuthenticationProvider (provided by the Microsoft identity packages) and, at the end, configure the app to use Google authentication as below:
args.App.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions()
{
    ClientId = ClientId,
    ClientSecret = ClientSecret,
    Provider = provider
});
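Putting it together, a minimal sketch of the whole processor could look like the listing below. It follows the Sitecore 9 federated authentication pattern (GetIdentityProvider, GetAuthenticationType and the claims transformation call come from the Sitecore.Owin.Authentication base class and extensions); the FedAuth.Google.* setting names match the configuration added later in this article, and exact member names may vary slightly between Sitecore 9 updates.

using System.Threading.Tasks;
using Microsoft.Owin.Security.Google;
using Sitecore.Configuration;
using Sitecore.Diagnostics;
using Sitecore.Owin.Authentication.Configuration;
using Sitecore.Owin.Authentication.Extensions;
using Sitecore.Owin.Authentication.Pipelines.IdentityProviders;
using Sitecore.Owin.Authentication.Services;

namespace SitecoreGmailAuth.Processor
{
    public class GmailIdentityProcessor : IdentityProvidersProcessor
    {
        public GmailIdentityProcessor(FederatedAuthenticationConfiguration federatedAuthenticationConfiguration)
            : base(federatedAuthenticationConfiguration)
        {
        }

        // Must match the id of the identityProvider node in GmailIdentityProvider.config
        protected override string IdentityProviderName => "Google";

        protected override void ProcessCore(IdentityProvidersArgs args)
        {
            Assert.ArgumentNotNull(args, nameof(args));

            var identityProvider = this.GetIdentityProvider();
            var authenticationType = this.GetAuthenticationType();

            var provider = new GoogleOAuth2AuthenticationProvider
            {
                OnAuthenticated = context =>
                {
                    // Apply the claims transformations configured for this provider
                    context.Identity.ApplyClaimsTransformations(
                        new TransformationContext(this.FederatedAuthenticationConfiguration, identityProvider));
                    return Task.CompletedTask;
                }
            };

            args.App.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
            {
                ClientId = Settings.GetSetting("FedAuth.Google.ClientId"),
                ClientSecret = Settings.GetSetting("FedAuth.Google.ClientSecret"),
                Provider = provider,
                AuthenticationType = authenticationType
            });
        }
    }
}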
  6. In App_Config\Include, add the file Sitecore.Owin.Authentication.Enabler.config. The only change done in this file is enabling federated authentication as below:

    <settings>
      <setting name="FederatedAuthentication.Enabled">
        <patch:attribute name="value">true</patch:attribute>
      </setting>
    </settings>

  7. Add GmailIdentityProvider.config to App_Config\Include.

The changes needed in this config are as follows. In the pipeline for identity providers, we add our own custom processor:

<pipelines>
  <owin.identityProviders>
    <!-- Processors for configuring providers. Each provider must have its own processor -->
    <processor type="SitecoreGmailAuth.Processor.GmailIdentityProcessor, SitecoreGmailAuth" resolve="true" />
  </owin.identityProviders>
</pipelines>

 

Please note that in the identityProviders section you have to reference the exact id of the identity provider used by your custom processor (here, 'Google'):

<identityProviders hint="list:AddIdentityProvider">
  <identityProvider ref="federatedAuthentication/identityProviders/identityProvider[@id='Google']" />
</identityProviders>
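The identityProvider definition that this ref points to must also exist in the patch file. Below is a sketch of a minimal definition, based on the default Sitecore 9 federated authentication configuration; the caption is illustrative, and the domain should match the FedAuth.Google.Domain setting added below:

<federatedAuthentication type="Sitecore.Owin.Authentication.Configuration.FederatedAuthenticationConfiguration, Sitecore.Owin.Authentication">
  <identityProviders hint="list:AddIdentityProvider">
    <identityProvider id="Google" type="Sitecore.Owin.Authentication.Configuration.DefaultIdentityProvider, Sitecore.Owin.Authentication">
      <param desc="name">$(id)</param>
      <param desc="domainManager" type="Sitecore.Abstractions.BaseDomainManager" resolve="true" />
      <caption>Log in with Google</caption>
      <domain>Sitecore</domain>
      <transformations hint="list:AddTransformation" />
    </identityProvider>
  </identityProviders>
</federatedAuthentication>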

  8. The last update is the settings for the Google client id and secret:

<settings>
  <setting name="FedAuth.Google.ClientId" value="yourclientid.apps.googleusercontent.com" />
  <setting name="FedAuth.Google.ClientSecret" value="yourclientsecret" />
  <setting name="FedAuth.Google.Domain" value="Sitecore" />
</settings>

Reset the application pool and run the Sitecore instance. You will get a screen like the one below, with an additional button to log in using Google.

[Screenshot: SCGL-initial]

On clicking it you will be redirected to the Google login page, and after signing in you will be redirected back to the Sitecore login page with the error below:

[Screenshot: SCGL-first login]

The user is now created in Sitecore, but it does not have any access to the system. An admin user needs to grant access so that the user can use the Sitecore CMS as an editor.

Before that, there is one more thing we need to change. The default implementation of ExternalUserBuilder in Sitecore creates a user name with a GUID, which is very difficult to identify. To resolve this, create another class CustomUserBuilder inheriting from ExternalUserBuilder and override the CreateUniqueUserName method to use the email address as the user name.

protected override string CreateUniqueUserName(UserManager<ApplicationUser> userManager, ExternalLoginInfo externalLoginInfo)
{
    Assert.ArgumentNotNull((object)userManager, nameof(userManager));
    Assert.ArgumentNotNull((object)externalLoginInfo, nameof(externalLoginInfo));
    IdentityProvider identityProvider = this.FederatedAuthenticationConfiguration.GetIdentityProvider(externalLoginInfo.ExternalIdentity);
    if (identityProvider == null)
        throw new InvalidOperationException("Unable to retrieve identity provider for given identity");
    string domain = identityProvider.Domain;
    // Use the external login's e-mail address instead of a generated GUID
    return domain + "\\" + externalLoginInfo.Email;
}
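The custom user builder then has to be registered so that Sitecore uses it instead of the default one. Below is a hedged sketch of the patch, assuming CustomUserBuilder lives in the SitecoreGmailAuth assembly (the namespace is illustrative, and node details can differ per Sitecore 9 update):

<federatedAuthentication>
  <identityProvidersPerSites>
    <mapEntry name="all sites">
      <externalUserBuilder type="SitecoreGmailAuth.Services.CustomUserBuilder, SitecoreGmailAuth" resolve="true">
        <param desc="isPersistentUser">true</param>
      </externalUserBuilder>
    </mapEntry>
  </identityProvidersPerSites>
</federatedAuthentication>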

Now, if you log in as an admin user, you will see the user created in Sitecore.

[Screenshot: SCGL-Role]

Provide an appropriate member role to the user. The user should now be able to log in.

[Screenshot: SCGL-logged in]

One important thing to take care of while using an external provider is that access to the login URL should be protected from website users. Otherwise you will end up with many users created in Sitecore who are not content editors, and it can be a possible security threat as well.


Autofac as DI container in Sitecore Helix architecture

The following article describes how to use Autofac as the DI container in a Sitecore application based on the Helix architecture. As you may know, in the Helix architecture the whole application is divided into multiple features, with each feature being an independent piece of functionality that does not depend on other features. This article is based on the article written by Kevin Brechbühl at https://ctor.io/one-way-to-implement-dependency-injection-for-sitecore-habitat/

The only difference is that instead of creating processors in the individual feature projects, I will register all the dependencies in one place using Autofac's module feature.

To start with, we will create a project in the Foundation layer with the name "ProjectName.Foundation.DependencyInjection". I have named my project Piccolo, and hence the project name will be "Piccolo.Foundation.DependencyInjection".

Follow these steps after creating the project:

  1. Add the Autofac NuGet package to the project. The command line for NuGet is: Install-Package Autofac.Mvc5 -Version 4.0.2
  2. We will create a custom pipeline processor in order to set Autofac as the dependency injection container in the Sitecore pipeline. Add a folder named Pipelines in "Piccolo.Foundation.DependencyInjection".
  3. Inside Pipelines add another folder InitializeContainer, and within InitializeContainer add a class InitializeContainer.cs.
  4. Add another folder Foundation to the project "Piccolo.Foundation.DependencyInjection" and add the config file Foundation.DependencyInjection.config to it.

The project should look like the below once you have finished the above steps.

[Screenshot: autofac-sitecore-project]

Populate the InitializeContainer class with the following listing:

using System;
using System.Linq;
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;
using Sitecore.Pipelines;

public class InitializeContainer
{
    public void Process(PipelineArgs args)
    {
        var builder = new ContainerBuilder();

        var assembliesInAppDomain = AppDomain.CurrentDomain.GetAssemblies().ToArray();

        // Register dependencies in controllers
        builder.RegisterControllers(assembliesInAppDomain);

        // Register dependencies in filter attributes
        builder.RegisterFilterProvider();

        // Register dependencies in custom views
        builder.RegisterSource(new ViewRegistrationSource());

        // Pick up every Autofac module defined in the loaded assemblies
        builder.RegisterAssemblyModules(assembliesInAppDomain);

        var container = builder.Build();

        // Set MVC DI resolver to use our Autofac container
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
    }
}

As you can see in the listing, all the modules and controllers within the AppDomain assemblies are registered by the lines below:

var assembliesInAppDomain = AppDomain.CurrentDomain.GetAssemblies().ToArray();
builder.RegisterControllers(assembliesInAppDomain);
builder.RegisterAssemblyModules(assembliesInAppDomain);

Add the following configuration in Foundation.DependencyInjection.config, where we add our processor just before Sitecore calls its InitializeControllerFactory processor:


<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <initialize>
        <processor type="Piccolo.Foundation.DependencyInjection.Pipelines.InitializeContainer.InitializeContainer, Piccolo.Foundation.DependencyInjection"
                   patch:before="processor[@type='Sitecore.Mvc.Pipelines.Loader.InitializeControllerFactory, Sitecore.Mvc']" />
      </initialize>
    </pipelines>
  </sitecore>
</configuration>

Now, in the individual feature modules, we need to create an Autofac module which will automatically be picked up by our custom Sitecore pipeline processor.

For illustration, I will take one feature as an example. The name of the feature project is Piccolo.Feature.Gallery.

Add a class GalleryModule to the project, inherited from Autofac's Module, where you will register all the dependencies for the corresponding feature:


using Autofac;

public class GalleryModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register the feature's concrete repository (see the sketch below)
        builder.RegisterType<ImageArticleRepository>().As<IRepository>();
    }
}

For each feature you can create a module in the same way.
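The repository types themselves are not shown in this article; a hypothetical minimal pair, just to make the example complete (names and members are illustrative), might look like this:

using System.Collections.Generic;
using System.Linq;
using Sitecore.Data;
using Sitecore.Data.Items;

// Hypothetical abstraction consumed by GalleryController
public interface IRepository
{
    IEnumerable<Item> Get(ID rootId);
}

// Hypothetical implementation registered by GalleryModule
public class ImageArticleRepository : IRepository
{
    public IEnumerable<Item> Get(ID rootId)
    {
        var root = Sitecore.Context.Database.GetItem(rootId);
        return root == null
            ? Enumerable.Empty<Item>()
            : root.GetChildren().Cast<Item>();
    }
}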

Now, the GalleryController in the above feature expects a constructor parameter of type IRepository, which will be provided by Autofac:

public class GalleryController : Controller
{
    private readonly IRepository _imageArticleRepository;

    public GalleryController(IRepository imageArticleRepository)
    {
        _imageArticleRepository = imageArticleRepository;
    }

    // GET: Gallery
    public ActionResult Search(string searchTerm = "")
    {
        var rootPath = Sitecore.Context.Database.GetItem("/sitecore/content/Home/Album");
        var result = _imageArticleRepository.Get(rootPath.ID);
        return View(result);
    }
}

Autofac will automatically inject the dependency into the controller's constructor, and it will then be available for use in the controller.

Tridion Object Cache – Apache ActiveMQ-based invalidation

The code below is based on the dd4t-cachechannel jar file available here. More discussion of this topic can be found in a question asked on Stack Exchange here.

Tridion object cache invalidation using the JMSCacheChannelConnector provided with Tridion CD sends messages in binary format, which the .NET-based DD4T 2.0 client is not able to understand. The jar file available in the link above acts as a deployer extension and converts the binary messages into text messages so that any .NET client can understand them. The problem is that once a message is converted into a text message, the Tridion object cache subscriber in the cd_cache jar starts throwing the error "Ignoring unexpected message type".

To solve this issue we need to modify the subscriber in cd_cache as well, so that it is able to understand text messages. Below is the full code listing of the changes made to the dd4t-cachechannel jar file.




package org.dd4t.cache;

import com.fasterxml.jackson.core.JsonProcessingException;



import com.tridion.cache.CacheChannelEventListener;
import com.tridion.cache.CacheEvent;
import com.tridion.cache.CacheException;
import com.tridion.cache.JMSCacheChannelConnector;
import com.tridion.configuration.Configuration;
import com.tridion.configuration.ConfigurationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.jms.*;
import javax.jms.IllegalStateException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;


import java.util.List;
import java.util.Properties;



public class TextJMSCacheChannelConnector extends JMSCacheChannelConnector {

private static Logger LOG = LoggerFactory.getLogger(TextJMSCacheChannelConnector.class);
private volatile boolean isValid = false;
private boolean isClosed = false;

private void verifyOpenState() throws IllegalStateException
{
if (this.isClosed) {
throw new IllegalStateException("Method was called on closed instance");
}
}
private MessageListener jmsTopicListener = new MessageListener()
{
public void onMessage(Message message)
{
handleJmsMessage(message);
}
};
private ExceptionListener jmsExceptionListener = new ExceptionListener()
{
public void onException(JMSException e)
{
handleJmsException(e);
}
};
private void handleJmsException(JMSException exception)
{
if (!this.isClosed) {
LOG.error("JMS Exception occurred. Attempting setting up JMS connectivity again", exception);
}
if (this.isValid)
{
this.isValid = false;
fireDisconnect();
}
}
private CacheChannelEventListener listener = emptyListener;
private static CacheChannelEventListener emptyListener = new CacheChannelEventListener()
{
public void handleRemoteEvent(CacheEvent event) {}



public void handleDisconnect() {}



public void handleConnect() {}
};
public void setListener(CacheChannelEventListener listener)
{
LOG.debug("Setting listner");
this.listener = (listener != null ? listener : emptyListener);
}
private void fireDisconnect()
{
if (!this.isClosed) {
this.listener.handleDisconnect();
}
}
public void validate()
throws CacheException
{



try {
verifyOpenState();
} catch (IllegalStateException e1) {
throw new CacheException("The connection is closed");
}



if (!this.isValid) {
try
{
this.client.cleanupIgnoringErrors();
this.client.connect(this.jmsTopicListener, this.jmsExceptionListener);
this.isValid = true;
this.listener.handleConnect();
}
catch (JMSException e)
{
this.client.cleanupIgnoringErrors();
throw new CacheException("JMS Exception occurred. Attempting setting up JMS connectivity later", e);
}
catch (NamingException e)
{
this.client.cleanupIgnoringErrors();
throw new CacheException("Unable to initialize JMS CacheChannelConnector", e);
}
}
}



protected void handleJmsMessage(Message msg)
{
LOG.debug("-----------------------------------------------------------------");
try {
if (msg instanceof TextMessage)
{
TextMessage textMessage = (TextMessage) msg;
Object payload = CacheEventSerializer.deSerialize(textMessage.getText());



if ((payload instanceof CacheEvent))
{
if (!this.isClosed) {



this.listener.handleRemoteEvent((CacheEvent)payload);
}
else
{
LOG.debug("Aborting, Listener is already closed");
}
}
else
{
LOG.debug("Payload is not an instance of cacheEvent");
}
}
else
{ LOG.debug("Handle JMS Object Message");
super.handleJmsMessage(msg);
}
} catch (Exception e) {



LOG.error(e.getMessage());
}
LOG.debug("-----------------------------------------------------------------");
}



public void configure(Configuration configuration) throws ConfigurationException {
LOG.info("Loading TextJMSCacheChannelConnector");
Properties jndiContextProperties = null;
if (configuration.hasChild("JndiContext"))
{
Configuration jndiConfig = configuration.getChild("JndiContext");
jndiContextProperties = new Properties();
List<Configuration> configs = jndiConfig.getChildrenByName("Property");
for (Configuration config : configs)
{
String propertyKey = config.getAttribute("Name");
String propertyValue = config.getAttribute("Value");
jndiContextProperties.setProperty(propertyKey, propertyValue);
LOG.debug("JMS Connector JNDI Property '{}' set with value '{}'",propertyKey,propertyValue);
}
}
String topicName = configuration.getAttribute("Topic", "TridionCacheChannel");
String topicConnectionFactoryName = configuration.getAttribute("TopicConnectionFactory", "TopicConnectionFactory");



LOG.debug("JMS Connector TopicConnectionFactory name is {}. Topic is: {}",topicConnectionFactoryName, topicName);



String strategy = configuration.getAttribute("Strategy", "AsyncJMS11");



LOG.debug("JMS strategy is: {} ", strategy);



if (("AsyncJMS11".equals(strategy)) || ("AsyncJMS11MDB".equals(strategy))) {
this.client = new TextJMS11Approach(jndiContextProperties, topicConnectionFactoryName, topicName, "AsyncJMS11MDB".equals(strategy));
} else if ("SyncJMS11".equals(strategy)) {
this.client = new SynchronousJMS11Approach(jndiContextProperties, topicConnectionFactoryName, topicName);
} else if (("AsyncJMS10".equals(strategy)) || ("AsyncJMS10MDB".equals(strategy))) {
this.client = new TextJMS10Approach(jndiContextProperties, topicConnectionFactoryName, topicName, "AsyncJMS10MDB".equals(strategy));
} else {
throw new ConfigurationException("Unknown 'Strategy':" + strategy + " for the JMS Connector");
}



}



public class TextJMS11Approach extends JMSCacheChannelConnector.JMS11Approach {
private TopicConnection topicConnection = null;
private TopicSession topicPublisherSession = null;
private TopicPublisher topicPublisher = null;
private TopicSubscriber topicSubscriber = null;
private TopicSession topicSubscriberSession = null;



protected TextMessage publicationTextMessage;



public TextJMS11Approach(Properties jndiProperties, String factoryName, String topicName, boolean isMDBMode) {
super(jndiProperties, factoryName, topicName, isMDBMode);
}



public void connect(MessageListener messageListener, ExceptionListener exceptionListener) throws JMSException, NamingException {
Context jndiContext = this.jndiProperties != null ? new InitialContext(this.jndiProperties) : new InitialContext();
TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory)jndiContext.lookup(this.topicConnectionFactoryName);
Topic topic = (Topic)jndiContext.lookup(this.topicName);



this.topicConnection = topicConnectionFactory.createTopicConnection();
if (!this.isMDBMode) {
try
{
this.topicConnection.setExceptionListener(exceptionListener);
}
catch (JMSException e)
{
TextJMSCacheChannelConnector.LOG.error("setExceptionListener failed. Most likely due to container restrictions. In these environments the MDB com.tridion.cache.JMSBean must be setup instead", e);
}
}
this.topicConnection.start();
if (!this.isMDBMode) {
try
{
this.topicSubscriberSession = this.topicConnection.createTopicSession(false, 1);
this.topicSubscriber = this.topicSubscriberSession.createSubscriber(topic, null, true);
this.topicSubscriber.setMessageListener(messageListener);
}
catch (JMSException e)
{
TextJMSCacheChannelConnector.LOG.error("setMessageListener failed. Most likely due to container restrictions. In these environments the MDB com.tridion.cache.JMSBean must be setup instead", e);
}
}
this.topicPublisherSession = this.topicConnection.createTopicSession(false, 1);
this.topicPublisher = this.topicPublisherSession.createPublisher(topic);
this.publicationTextMessage = this.topicPublisherSession.createTextMessage();
LOG.debug("Connected to queue, with topic: {} ", topic);
}



public void broadcastEvent(CacheEvent event) throws JMSException {
try {
String serialized = CacheEventSerializer.serialize(event);
this.publicationTextMessage.setText(serialized);
this.topicPublisher.publish(this.publicationTextMessage);
LOG.debug("Published event: {}", serialized);
} catch (JsonProcessingException e) {
LOG.error("Cannot serialize cache event into JSON", e);
}
}
}



public class TextJMS10Approach extends JMSCacheChannelConnector.JMS10Approach {
public TextJMS10Approach(Properties jndiProperties, String factoryName, String topicName, boolean isMDBMode) {
super(jndiProperties, factoryName, topicName, isMDBMode);
}
}
public class TextSynchronousJMS11Approach extends JMSCacheChannelConnector.SynchronousJMS11Approach {
public TextSynchronousJMS11Approach (Properties jndiProperties, String factoryName, String topicName) {
super(jndiProperties, factoryName, topicName);
}
}
}



The second file is the CacheEventSerializer class used above, which converts cache events to and from JSON:



package org.dd4t.cache;


import java.io.IOException;


import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.tridion.cache.CacheEvent;



public class CacheEventSerializer {
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

public static String serialize(final CacheEvent cacheEvent) throws JsonProcessingException {
return OBJECT_MAPPER.writeValueAsString(cacheEvent);
}

public static CacheEvent deSerialize(final String textMessage) throws JsonParseException, JsonMappingException, IOException {
Dd4tCacheEvent dd4tCacheEvent = OBJECT_MAPPER.readValue(textMessage, Dd4tCacheEvent.class);
return new CacheEvent(dd4tCacheEvent.regionPath, dd4tCacheEvent.key, dd4tCacheEvent.type);
}
}


The third file is the Dd4tCacheEvent class, a simple DTO that mirrors the JSON payload:


package org.dd4t.cache;


import com.fasterxml.jackson.annotation.JsonProperty;


public class Dd4tCacheEvent {


@JsonProperty("regionPath")
public String regionPath;
@JsonProperty("key")
public String key;
@JsonProperty("type")
public int type;
public Dd4tCacheEvent()
{}
}


As you can see in the code, the validate() method is overridden in the TextJMSCacheChannelConnector class so that we can provide our own handleJmsMessage(Message msg) method. This method checks whether the message is a text message; if so, it converts it into a CacheEvent object and passes it to the handleRemoteEvent() method of the cache channel listener to invalidate the cache entry.

If the message is not a text message it will follow the normal flow.
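For completeness, a .NET client can now read these text messages directly. The following is a hypothetical sketch using the Apache.NMS.ActiveMQ and Newtonsoft.Json packages; the broker URL is a placeholder, the topic name matches the Java default above, and CacheEventDto simply mirrors the Dd4tCacheEvent JSON shape (none of this is part of DD4T itself):

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using Newtonsoft.Json;

// Mirrors the Dd4tCacheEvent JSON payload produced by the deployer extension
public class CacheEventDto
{
    public string RegionPath { get; set; }
    public string Key { get; set; }
    public int Type { get; set; }
}

public static class CacheChannelListener
{
    public static void Main()
    {
        var factory = new ConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
        {
            ITopic topic = session.GetTopic("TridionCacheChannel");
            IMessageConsumer consumer = session.CreateConsumer(topic);
            consumer.Listener += message =>
            {
                // Only text messages are expected from TextJMSCacheChannelConnector
                if (message is ITextMessage textMessage)
                {
                    var evt = JsonConvert.DeserializeObject<CacheEventDto>(textMessage.Text);
                    Console.WriteLine($"Invalidate region '{evt.RegionPath}', key '{evt.Key}', type {evt.Type}");
                    // ...hand the event to the DD4T cache invalidation logic here...
                }
            };
            connection.Start();
            Console.ReadLine(); // keep listening
        }
    }
}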

Tridion content delivery website webfarm using Azure File Storage

While scaling a Tridion website across different virtual machines, the major challenge usually encountered is keeping the content on all servers in sync.

There are a couple of approaches which can be considered:

  1. Multiple deployers, each corresponding to one virtual machine.

While publishing, all the deployers can be configured on one publishing target so that the operation is performed in a transaction.

Possible issues:

  • Publishing time will increase as the number of VMs increases.
  • A new deployer needs to be configured or removed while scaling up or down, i.e. on addition or removal of a VM.

  2. A file replication script like robocopy.

In this approach the files are published to a single physical location on one VM. A scheduler running on that VM executes a robocopy script which syncs this folder to the website folder on the different servers.

Possible issues:

  • The publishing time, and when changes are reflected, depend on the frequency of the scheduler. For example, if the scheduler interval is 5 minutes, the changes will be reflected on the website after 5 minutes. This time will also grow with the number of VMs as well as the number of files and assets.
  • There is no guarantee that all the folders will be in sync, as it is quite possible that the script fails after syncing only a few VMs.

  3. Creating a shared network folder to which all content is published, with the website instances on the different VMs all pointing to this one folder.

Possible issues:

This approach does not have any of the issues mentioned above, but the major challenge is a single point of failure (SPOF). If for some reason the network folder is not available, it will bring all the websites down. Also, you will have to provide some explicit backup mechanism, or the data will be lost.

  4. Quite similar to the above approach, but without the single-point-of-failure issue, is using Azure File Storage, which offers high availability as well as high performance. With Azure File Storage, the web content can be stored independently of the web servers.

Possible issues:

If the file storage is in a different geographical location, there might be performance issues.

Azure File Storage is a highly scalable and highly available file storage service which can be accessed by applications running on different VMs in Azure, just like a network share.
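As a quick illustration, the file share itself can be created programmatically with the classic Azure Storage SDK (the WindowsAzure.Storage NuGet package). This is a hedged sketch; the account name, key and share name are placeholders:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

class CreateShare
{
    static void Main()
    {
        // Placeholder connection string; use the account name and key from the Azure portal
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=azureusername;AccountKey=xxxxxxxx");

        CloudFileClient fileClient = account.CreateCloudFileClient();
        CloudFileShare share = fileClient.GetShareReference("websitedirectory");

        // Creates the SMB share that the IIS sites and the deployer will point to
        share.CreateIfNotExists();
    }
}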

 

[Diagram: Drawing1]

The following information will be required while implementing a Tridion CD website using Azure File Storage:

 

Account Name: azureusername
File Endpoint: https://<account>.file.core.windows.net
Account Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 

Steps to configure shared file storage for a webfarm

  1. Create an Active Directory user with the same name as the account name and the password the same as the account key of the Azure file storage. If no Active Directory is available, you will have to create a local user with the same name and password on each VM hosting the application, which will be harder to maintain but will work. Remember to set the password to never expire. (You can verify connectivity to the share up front; see the snippet after this list.)
  2. Create a web application in IIS and, as the physical path, provide the UNC path of the Azure shared file storage. Make sure you have copied all the website's physical assets to a folder on this shared storage.

[Screenshot: 1]

  3. Click on "Connect as" and select the "Specific user" radio button.

[Screenshot: 2]

  4. Provide the credentials of the domain user and password and click OK.

[Screenshot: 3]

  5. Since xmogrt.dll is not a .NET assembly, it will not be accessible from a network location. You will have to delete this dll from the bin folder of your application and copy it to %SystemDrive%\Windows\System32.
  6. Add the domain user to the IIS_IUSRS group on the local system.
  7. Recycle the application pool.
  8. Repeat steps 2 to 7 on each web server that is going to be attached to the load balancer.
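Before wiring up IIS, it can be useful to verify from each VM that the share is reachable at all. The snippet below is a quick check from a command prompt using the standard Azure Files SMB mounting syntax; <account> and <accountkey> are placeholders for your storage account name and key:

net use Z: \\<account>.file.core.windows.net\websitedirectory /u:AZURE\<account> <accountkey>

If the drive maps successfully, the same credentials will work for the IIS "Connect as" user and for the deployer's application pool identity.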

 

 

Setting up the deployer to publish to shared file storage

There are only a couple of changes required in the HTTP deployer to deploy files to the shared storage.

  1. In cd_storage_conf.xml, in the storage section for the file system, provide the UNC path of the storage:

<Storage Type="filesystem" Class="com.tridion.storage.filesystem.FSDAOFactory" Id="defaultFile" defaultFilesystem="false">
  <Root Path="\\<account>.file.core.windows.net\websitedirectory\" />
</Storage>

 

  2. Create a new application pool with the identity set to a custom account. Here you need to specify the credentials of the domain user you created earlier so that the deployer can access the shared storage.
  3. Assign this application pool to your HTTPUpload application.

 

References

http://blogs.iis.net/davidso/azurefile

https://azure.microsoft.com/en-gb/documentation/articles/storage-dotnet-how-to-use-files/