
Approaches to Indexing Multiple Logs File Types in Solr and Setting up a Multi-Node, Multi-Core Solr Cloud


Introduction

Apache Solr is a widely used open source search platform built on Apache Lucene indexing. Solr stores indexed data and provides a highly scalable, capable search solution for the enterprise. This article outlines single-core and multi-core approaches to indexing and querying multiple log file types in Solr. Solr indexes the log files generated by servers and allows searching the logs for troubleshooting. It can also scale to a multi-node cluster that operates in a distributed and fault-tolerant manner; these capabilities are collectively called SolrCloud. Solr uses ZooKeeper to coordinate this distributed operation.

Approaches to Indexing Multiple Log File Types

Single Solr schema to index disparate log file types

In this first approach, all log types share a single index and schema definition. In a Solr setup, each core is associated with one Solr schema definition and configuration. The figure below shows the high-level architecture of a single Solr index covering different log file types. For instance, here we use it to index web and application server logs; both are indexed according to the same schema definition.


Figure 1. Single Solr schema and index for multiple log types

We define the fields of the log file document that need to be indexed in the schema.xml file. As the log files get indexed, Solr generates the index files, which reside in the Solr core's data folder.

Consider log files with different sets of fields. For example:

Web log fields: date, time, time-taken, cs-method, cs-uri, sc-status, etc.

App log fields: date, time taken, server-name, server-ip, site-name, cs-method, etc.

There are two ways we can generate the schema file for indexing:

  • If the fields are the same in the web and app server logs, we can directly define the field names and types in the schema file for indexing
  • If the fields are unique to the web or app server logs, we define them as dynamic fields in the schema file for indexing (see the snippet below)
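
As a rough illustration, the two styles look like this in schema.xml. The field names and types here are assumptions based on the log fields listed above:

    <!-- Common fields shared by both log types, defined explicitly -->
    <field name="date" type="tdate" indexed="true" stored="true"/>
    <field name="cs-method" type="string" indexed="true" stored="true"/>

    <!-- Type-specific fields matched by a dynamic field pattern,
         e.g. sc-status_s for web logs or server-ip_s for app logs -->
    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>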

Solr Schema Definition

In the Solr schema definition file, define the required common and dynamic fields and their types. Also define which field serves as the unique key, and how the fields are indexed and searched through Solr queries.

Schema.xml

A minimal schema.xml for this setup, using 'uid' as the unique key and a catch-all 'everything' field; the other field definitions here are representative assumptions:

    <schema name="logsearch" version="1.5">
      <fields>
        <field name="uid" type="string" indexed="true" stored="true" required="true"/>
        <field name="logtype" type="string" indexed="true" stored="true"/>
        <field name="logline" type="text_general" indexed="true" stored="true"/>
        <field name="everything" type="text_general" indexed="true" stored="false" multiValued="true"/>
        <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
      </fields>
      <uniqueKey>uid</uniqueKey>
      <!-- Copy all fields into a single catch-all field for simple searching -->
      <copyField source="*" dest="everything"/>
      <types>
        <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
        <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6"/>
        <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
          <analyzer><tokenizer class="solr.StandardTokenizerFactory"/></analyzer>
        </fieldType>
      </types>
    </schema>

Consider the scenario of indexing different log file types. We can either generate a separate schema (multiple indexes), one per log file type, or merge the fields into a single index. With a single index, we need an identifier field (such as logtype above) to record whether a document came from the web logs or the app logs.

Sample program to generate an index for different log file types

Refer to the Solr documentation to set up, configure, and start Solr. Create the logsearch core under the Solr folder and generate the schema file for indexing log data. Start Solr and open the admin UI in a browser.
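
With a Solr 4.x distribution, starting the server looks roughly like this (folder names are assumptions based on the 4.1.0 layout used later in this article):

    cd solr-4.1.0/example
    java -jar start.jar

The admin UI is then available at http://localhost:8983/solr.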

We have created a sample Java program to index the different kinds of log data. This client program builds the documents to index; the generated index files end up in the Solr core's data/index location. (The field mapping inside the loop below is a minimal sketch.)

package com.apachesolr.infy.client;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.common.SolrInputDocument;

public class GenerateSolrIndex {

    /** Get a Solr connection. */
    public static HttpSolrServer getSolrConnection() throws Exception {
        // Configure a server object with the actual Solr values.
        HttpSolrServer solrServer = new HttpSolrServer(
                "http://localhost:8983/solr/logsearch");
        solrServer.setParser(new XMLResponseParser());
        return solrServer;
    }

    /** Build Solr documents from the web log file. */
    public Collection<SolrInputDocument> addWebLogData() throws Exception {
        File file = new File("D:\\Solarsetups\\samplelogs\\Weblog.log");
        Collection<SolrInputDocument> inputDocuments =
                new ArrayList<SolrInputDocument>();
        String logtype = "weblogs";
        int i = 0;
        BufferedReader bufferedReader = new BufferedReader(
                new InputStreamReader(new FileInputStream(file)));
        // Read each log line and turn it into a Solr document. The field
        // mapping below is a minimal sketch; a real parser would split the
        // line into the schema fields (date, time-taken, cs-method, ...).
        for (String line; (line = bufferedReader.readLine()) != null; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("uid", logtype + "-" + i);
            doc.addField("logtype", logtype);
            doc.addField("logline", line);
            inputDocuments.add(doc);
        }
        bufferedReader.close();
        return inputDocuments;
    }
}
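
To actually push these documents into the index, a short driver along these lines would follow (a sketch built on the methods above):

    HttpSolrServer server = GenerateSolrIndex.getSolrConnection();
    server.add(new GenerateSolrIndex().addWebLogData());  // send the documents
    server.commit();                                      // make them searchable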

Solr Query

Solr provides an admin UI (at http://localhost:8983/solr) to test, debug, and set the request query parameters for searching indexed data. We can pass a query string in the q box, such as logtype:weblogs, or pass the bare terms weblogs or applogs.

Search indexed data by log file type

Query      Result
weblogs    Returns only the indexed web server log data
applogs    Returns only the indexed app server log data
*:*        Returns both app and web log data
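
The same queries can be issued directly over HTTP against the core's select handler, assuming the logsearch core from earlier:

    http://localhost:8983/solr/logsearch/select?q=logtype:weblogs&wt=json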

Multiple Solr schemas to index disparate log file types

This is the second approach to Solr indexing: each log file type has a separate index and schema definition. In the Solr setup, each core is associated with one Solr schema definition and configuration. The figure below shows the high-level architecture of multiple Solr indexes, one per set of log file types. For instance, here the web and app server logs are indexed separately, each according to its own schema definition.


Figure 2. Each log type has separate Solr schema and index
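
Concretely, this means one core directory, and therefore one schema.xml, per log type. With illustrative core names:

    multicore/
        weblogcore/conf/schema.xml    (fields for web logs)
        applogcore/conf/schema.xml    (fields for app logs)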

Setting up Multi-Node, Multi-Core Solr Cloud

Why we need SolrCloud

In a typical enterprise scenario, millions of documents may need to be indexed, and index sizes may grow to hundreds of GBs. In this situation we need the indexes to be distributed across servers, as well as replicated so that the setup can handle failover. A distributed structure also helps share the load of search queries across the servers. In the context of SolrCloud we come across terms like index, Solr core, and collection.

A single-core, single-instance Solr setup is associated with a single schema, as defined in Solr's schema.xml file. We define the fields of the document that need to be indexed in schema.xml. As documents get indexed, Solr generates the index, which resides in the designated data folder of the Solr instance.

As we may need to scale the Solr setup to meet quality-of-service requirements, such as high availability with hundreds of users querying across hundreds of GBs of indexed files, we will need the solution to work in a distributed manner. In such a scenario we need indexes distributed across multiple Solr instances, which may run on a single physical server or on multiple servers. This capability of Solr is called SolrCloud.

The multiple Solr instances are managed by ZooKeeper servers. Solr comes packaged with an embedded ZooKeeper server, and the user also has the option to use an external ZooKeeper.


Figure 3. Single core Solr instance

When do you need a multi-core Solr setup?

In an organization there will be disparate kinds of documents that need to be indexed for different purposes. For example, a news portal site might need to index the articles the organization publishes as well as the web server logs, for searching the log files. These call for completely different sets of fields to index. Even so, we do not need separate Solr instances; we need two separate cores running as part of a single Solr instance, each associated with its own index. For example, the LogCore will maintain the index of the application log files and the ArticleCore will maintain the index of the news articles the portal publishes.

A collection, in the Solr context, is one logical index that may be distributed across multiple Solr cores. In our example, if we scale the multi-core Solr setup to multiple Solr instances, we will have two collections: an article collection and a log collection.


Figure 4. Multi-core Solr instance

How to create a multi-core, multi-instance Solr set up

Here we will take the above example and create two cores in our Solr instance: one core for storing the articles, named 'articlecore', and the other, containing the index of logs, named 'logcore'.

The easiest way to build the multi-core setup is to modify the multi-core example that comes as part of the Solr distribution. For example, suppose we unzip the Solr distribution at the location Solr-4.1.0.

Then go to the /Solr-4.1.0/example/multicore folder. It contains the solr.xml file, which needs to be modified to handle multiple cores as per our requirement.

By default the multi-core example handles two cores, named core0 and core1. For our example, let's rename the 'core0' folder to 'articlecore' and the 'core1' folder to 'logcore'.
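
On the command line this is just two renames (mv on Linux, ren on Windows):

    cd Solr-4.1.0/example/multicore
    mv core0 articlecore
    mv core1 logcore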

Then we need to modify the solr.xml file.
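
After the rename, solr.xml must declare the two cores. A minimal version, modeled on the Solr 4.x multi-core example (exact attributes may vary by version), looks like this:

    <solr persistent="false">
      <cores adminPath="/admin/cores" host="${host:}" hostPort="${jetty.port:}">
        <core name="articlecore" instanceDir="articlecore" />
        <core name="logcore" instanceDir="logcore" />
      </cores>
    </solr>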

So now we have one of the instances ready. Our objective is to build a multi-instance, multi-core setup, so we need a similar configuration in another Solr setup. If we are setting up the multi-node Solr on two servers named 'searchbox1' and 'searchbox2', the configuration mentioned above needs to be completed on both boxes.

As mentioned earlier, we need a ZooKeeper server to take care of the cluster setup. Solr comes with its own embedded ZooKeeper. A single ZooKeeper can manage both of the Solr instances, or we can use an ensemble of ZooKeepers to manage them. For our example we will use a single ZooKeeper instance running on 'searchbox1'.

From the example directory we run the following command to start the Solr instance on 'searchbox1'.

java -Dsolr.solr.home=multicore  -Dbootstrap_conf=true -DzkRun=searchbox1:9983 -DzkHost=searchbox1:9983  -DnumShards=2 -jar start.jar

Details of Arguments:

-Dsolr.solr.home=multicore: This argument tells Solr to use the multicore folder under example as the Solr home. By default, the solr folder under example is used as the Solr home. This way, the solr.xml configuration changes we made take effect, and the two cores, 'articlecore' and 'logcore', start along with the Solr instance.

-Dbootstrap_conf=true: This argument tells Solr to upload the Solr configurations to the ZooKeeper server.

-DzkRun=searchbox1:9983: This argument tells the embedded ZooKeeper server to start on port 9983.

-DzkHost=searchbox1:9983: This argument tells Solr that the ZooKeeper managing the Solr instance is running on 'searchbox1' on port 9983.

-DnumShards=2: This argument tells Solr that there will be two shards (one per instance in our case) as part of this setup.

Once the Solr instance is up on 'searchbox1', we go to the example folder on 'searchbox2' and use the command:

java -DzkHost=searchbox1:9983 -Dsolr.solr.home=multicore -jar start.jar

Details of Arguments:

-DzkHost=searchbox1:9983: This argument tells this instance of Solr to use the ZooKeeper running on 'searchbox1'.

Once this command is run, 'searchbox2' joins 'searchbox1' to form a cluster. The 'articlecore' indexes on 'searchbox1' and 'searchbox2' together form one logical index, or collection, and the same is true of 'logcore'.
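
The resulting cluster layout (shards, replicas, and live nodes) can be verified in the admin UI's Cloud view, which in Solr 4.x is typically at:

    http://searchbox1:8983/solr/#/~cloud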


Kalpana C is a Technology Analyst with ILCLOUD at Infosys Labs. She has a decade of experience in Java/J2EE and Big Data related frameworks and technologies.

Priyadarshi Sahoo is a Technology Lead at Infosys Ltd. He has more than 8 years of experience in Java/J2EE related technologies.
