Load Sharing

In a load sharing scenario, you run several search applications which all use the same data. Normally, all search applications are set up behind a load balancer which distributes queries. By definition, one of these instances is the master instance, which hands over data to its slave instances. The UI application is linked to this master instance. You can decide whether the master instance also handles product queries or serves purely as a staging environment. This section assumes three productive search instances (search01, search02 and search03), with search01 being the search master and each instance having an independent file system.

This scenario also allows you to create a fallback system for added redundancy.

Configuring Analytics

You must ensure that Analytics has access to the productive search applications' search and tracking logfiles. If every instance has its own resource folder, there are two ways to do this: either grant Analytics file access to the folders, or transfer the logfiles from the resource folders to the Analytics server. The search applications only write the logfiles and have no further use for them afterwards. On the Analytics server, you have to keep the logfiles separated by search instance. The target structure on the Analytics server could look like this:

/opt/factfinder/search01
  |-- searchLogs
    |-- channel-a
      |-- daily
        |-- ff.20121126.log
        |-- ff.20121126.log.catalog
  |-- scicLogs
    |-- shoppingcart.2012-12-10.log
    |-- shoppingcart.2012-12-11.log
/opt/factfinder/search02
  |-- ...
/opt/factfinder/search03
  |-- ...
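
If you transfer the logfiles instead of granting direct file access, a scheduled copy job on the Analytics server is usually enough. Below is a minimal sketch, assuming SSH access from the Analytics server to the search instances and that each instance keeps its logs under /opt/factfinder/fact-finder/logs; the hostnames and source paths are assumptions, so adjust them to your setup:

#!/bin/sh
# Pull the search and tracking logfiles from every productive search instance.
# Run on the Analytics server, e.g. via cron; hostnames and source paths are assumptions.
for host in search01 search02 search03; do
  for dir in searchLogs scicLogs; do
    rsync -az "${host}:/opt/factfinder/fact-finder/logs/${dir}/" "/opt/factfinder/${host}/${dir}/"
  done
done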

You also have to change the corresponding paths in applications.properties (see: Configuring FACT-Finder).

ffa.persistence.searchLogDirectory=/opt/factfinder/search01/searchLogs,/opt/factfinder/search02/searchLogs,/opt/factfinder/search03/searchLogs
ffa.persistence.scicLogDirectory=/opt/factfinder/search01/scicLogs,/opt/factfinder/search02/scicLogs,/opt/factfinder/search03/scicLogs

Configuration for using Network Memory

If all search instances share the same resource folder, you have to set an additional Java option (see Software Setup). Otherwise, all instances would write to the same logfile, which leads to problems. Add the following option to your setenv.sh file, replacing hostname with the respective instance name (e.g. search01):

-Dfff.node.logs.subdirectory='hostname'
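
In a Tomcat setup this option is typically appended to the Java options in setenv.sh; the following is a sketch for search01, assuming CATALINA_OPTS is how your Java options are passed:

# setenv.sh on search01 -- add the logfile sub-directory option to the existing Java options
CATALINA_OPTS="$CATALINA_OPTS -Dfff.node.logs.subdirectory=search01"
export CATALINA_OPTS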

If this option is active, the search application creates a sub-directory in the logfile folder, using hostname as its name. You can check this for the master instance via the UI application at System -> System Information -> Check Paths. To check the slave instances, use this URL:

http://<search app URL>/Messages.ff?do=showSystemMonitoringOverview
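
You can also request this page from the command line to verify each slave without a browser; the hostname, port and context path below are placeholders for your actual search application URL:

curl -s "http://search02:8080/fact-finder/Messages.ff?do=showSystemMonitoringOverview"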

If the paths are displayed correctly, amend Analytics' applications.properties configuration file to reflect them.
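
With the sub-directory option active, each logfile folder gains one level per instance. As a sketch, assuming shared logfile folders under /opt/factfinder/resources (the paths here are assumptions; verify the actual folders via Check Paths first):

ffa.persistence.searchLogDirectory=/opt/factfinder/resources/searchLogs/search01,/opt/factfinder/resources/searchLogs/search02,/opt/factfinder/resources/searchLogs/search03
ffa.persistence.scicLogDirectory=/opt/factfinder/resources/scicLogs/search01,/opt/factfinder/resources/scicLogs/search02,/opt/factfinder/resources/scicLogs/search03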

Configuring Search Instances

As mentioned before, we expect the search01 instance to be the master application. This means that the FACT-Finder user makes all changes on this instance, making it imperative to link the UI to it. This instance is also the only one to import shop product data and to create the FACT-Finder databases which are handed over to the other instances. It is necessary to create a connection between the master search instance and Analytics to enable an import. Please check your settings under System -> System Information.

Hint: Imports are normally scheduled, so you should take note of the change date. You can also trigger an import via the API and afterwards perform a synchronisation.

Changes to the Master Search Application

To facilitate data synchronisation, move the scheduler configuration out of the conf folder to a different location. We advise you to simply move conf/scheduler one level up, so that it sits directly in the application-specific resource folder (e.g. /opt/factfinder/fact-finder/scheduler). To ensure that FACT-Finder can find this folder, add the following line to the conf/fff.properties configuration file (or change the existing line accordingly):

scheduler.directory={APP_RESOURCES}/scheduler/
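
On the master, the move itself can be done like this, assuming the application-specific resource folder is /opt/factfinder/fact-finder (the path is an assumption):

# Move the scheduler configuration one level up within the resource folder
cd /opt/factfinder/fact-finder
mv conf/scheduler scheduler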

Please check if the changes have been successful at System -> System Information by first clicking "Reload Configuration" and then reviewing the entries under "Scheduled Tasks".

Synchronising Resources

To keep the search slaves up to date, you need to set up a synchronisation with their master. After changing the scheduler directory as described above, you can do this via rsync (see the sketch after this list). You need to keep the contents of the following sub-directories in sync:

  • analytics
  • campaigns
  • conf
  • customClasses
  • indexes
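
A minimal rsync sketch for one slave, assuming an identical resource folder /opt/factfinder/fact-finder on master and slaves and SSH access from the master (hostnames and paths are assumptions):

#!/bin/sh
# Run on the search master (search01) to push the shared resources to a slave.
RES=/opt/factfinder/fact-finder
for dir in analytics campaigns conf customClasses indexes; do
  rsync -az --delete "${RES}/${dir}/" "search02:${RES}/${dir}/"
done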

The next step is to load the search master's configuration files into all search slaves. Since we edited the conf/fff.properties file in the previous step, the success of the synchronisation can be verified there.

Changes to the Slave Search Applications

  • All imports and recurring tasks have to be disabled for the search slaves, as the scheduler configuration is copied from the master application. First, copy the scheduler directory from the master, then remove all of its properties files except clearCache.properties and reloadAllDatabases.properties. Copy this modified folder to all slave instances (see the sketch below). These changes are not overwritten, because the scheduler folder is not synchronised from the master.
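
A sketch of this preparation, assuming the scheduler folder was moved to /opt/factfinder/fact-finder/scheduler as described above (paths and hostnames are assumptions):

#!/bin/sh
# Build a slave scheduler folder from the master's copy and distribute it.
SRC=/opt/factfinder/fact-finder/scheduler
TMP=$(mktemp -d)
cp -r "$SRC" "$TMP/scheduler"
# keep only the two tasks the slaves still need
find "$TMP/scheduler" -name '*.properties' ! -name clearCache.properties ! -name reloadAllDatabases.properties -delete
for host in search02 search03; do
  rsync -az --delete "$TMP/scheduler/" "${host}:/opt/factfinder/fact-finder/scheduler/"
done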

Configuration for using Network Memory

If you are using network memory, all instances automatically access the same configurations and databases, so the synchronisation step can be skipped. You should still make sure that only one search environment performs imports, so some small modifications are necessary.

You need to set a new Java option in setenv.sh which allows the scheduler to use different files per instance. Add the following option and replace the hostname value with your own setting:

-Dinstance='hostname'
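
As with the logfile option above, in a Tomcat setup this usually goes into setenv.sh; a sketch for search01, assuming CATALINA_OPTS carries your Java options:

# setenv.sh on search01 -- name the instance so the scheduler can pick its own sub-directory
CATALINA_OPTS="$CATALINA_OPTS -Dinstance=search01"
export CATALINA_OPTS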

Next, use this option in the scheduler's path definition. Just as before, you need to amend conf/fff.properties with the following entry:

scheduler.directory={APP_RESOURCES}/conf/scheduler/{prop:instance}/

Next, create a new sub-directory for each instance at conf/scheduler, naming each folder after the respective hostname (in our example search01, search02 and search03). Then move the properties files under conf/scheduler to the search master's sub-directory (search01).

Afterwards, copy the files clearCache.properties and reloadAllDatabases.properties to each search slave's sub-directory.
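
Taken together, the directory preparation could look like this on a shared resource folder mounted at /opt/factfinder/resources (the mount point is an assumption):

#!/bin/sh
# Per-instance scheduler folders on the shared resource folder (assumed mount point)
RES=/opt/factfinder/resources
cd "$RES/conf/scheduler"
mkdir -p search01 search02 search03
# the master keeps all scheduled tasks ...
mv *.properties search01/
# ... while the slaves only keep cache clearing and database reloading
for host in search02 search03; do
  cp search01/clearCache.properties search01/reloadAllDatabases.properties "$host/"
done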

Configuring the Load Balancer

If the Personalization module is active, it is important that each session's queries are answered by the same instance. To ensure this, you need to configure the load balancer with sticky sessions and manage the assignment via the sid parameter. This parameter can appear in the HTTP headers as well as in the URL parameters, so make sure to check both.
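
How sticky sessions are set up depends on your load balancer. As one possible sketch, HAProxy can stick on the sid URL parameter; the backend name, hostnames and ports are assumptions, and a corresponding rule for a sid request header can be added in the same way:

# HAProxy backend sketch: keep all requests of one FACT-Finder session (sid) on the same instance
backend factfinder_search
    balance roundrobin
    stick-table type string len 64 size 200k expire 30m
    stick on urlp(sid)
    server search01 search01:8080 check
    server search02 search02:8080 check
    server search03 search03:8080 check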