Directing Sling job logging output to separate files?

Directing Sling job logging output to separate files?

John Logan
Hi,


I'm using the Sling job manager to handle some long-running tasks, and would like to direct each job's log output to its own file at a job-specific path.  Is there a straightforward way to achieve this?


Thanks!


John Logan


Re: Directing Sling job logging output to separate files?

Robert Munteanu-2
Hi John,

On Fri, 2017-09-08 at 05:28 +0000, John Logan wrote:
> Hi,
>
>
> I'm using the Sling job manager to handle some long running tasks,
> and would like to direct the log output for each job to its own file
> at a job-specific path.  Is there a straightforward way to achieve
> this?

If your jobs use separate loggers, you can achieve that either by:

- manually creating loggers and appenders via http://localhost:8080/system/console/slinglog/
- adding specific loggers/appenders to the provisioning model

There might be a way of adding those at runtime using the logback APIs,
but I haven't tried it before.
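A rough, untested sketch of that third, runtime option, assuming logback-classic is on the classpath (the logger/appender names here are invented for illustration):

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class RuntimeAppenderSketch {

    /** Create a logger for one job whose output goes only to its own file. */
    public static org.slf4j.Logger createJobLogger(String jobId, String logFile) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(ctx);
        appender.setName("job-" + jobId);
        appender.setFile(logFile);
        appender.setEncoder(encoder);
        appender.start();

        Logger logger = ctx.getLogger("jobs." + jobId);
        logger.setAdditive(false);    // keep job output out of the root log
        logger.addAppender(appender);
        return logger;
    }
}
```

The appender would also need to be stopped and detached when the job finishes, or they will accumulate over the life of the process.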

Robert

Re: Directing Sling job logging output to separate files?

John Logan
Thanks, Robert!  I suspected that configuring a logback SiftingAppender might be what's needed, but wanted to check whether anything else was available.  I'll give it a try.


jgl


Re: Directing Sling job logging output to separate files?

John Logan
In reply to this post by Robert Munteanu-2
I got this working and thought I'd follow up with what I did in case anyone else needs this sort of thing.


I used SiftingAppender pretty much as shown in the examples one can find online.  I put the following logback.xml in my sling.home, and pointed the OSGi configuration property org.apache.sling.commons.log.configurationFile at it:


<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>jobId</key>
      <defaultValue>/var/log/sling/error.log</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
        <file>${logPath}</file>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </layout>
      </appender>
    </sift>
  </appender>

  <root level="info">
    <appender-ref ref="SIFT" />
  </root>
</configuration>


An abstract wrapper implementation of JobExecutor sets up the MDC to conform to what's expected in the logback.xml, and provides a few other niceties.  Replace the SlingResourceProvider/StorageNode stuff with whatever you use to access files and Sling nodes; it's just a simple abstraction layer that we happen to be using.


package com.xyz.content.sling.processor;

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.consumer.JobExecutionContext;
import org.apache.sling.event.jobs.consumer.JobExecutionResult;
import org.apache.sling.event.jobs.consumer.JobExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import com.xyz.content.sling.storage.SlingResourceProvider;
import com.xyz.storage.StorageNode;

/**
 * Sling job executor with build job logging.
 *
 * This class wraps build processing with job-specific log management.
 * Job submitters need to configure the following job properties
 * prior to submitting a job:
 * * jobId - A symbolic identifier for the job.  This value must be
 *           unique throughout the life of the job.  Subclasses
 *           may use this value as a job name for submitting
 *           and monitoring SLURM jobs.
 * * logPath - The pathname to the build log file.
 * * damLogPath - If specified, the DAM resource to which
 *                the log output should be copied upon build completion.
 * * resetLog - An optional boolean parameter which, when true,
 *              removes the build log file if it exists prior
 *              to commencing the build.
 *
 * @author john
 *
 */
public abstract class BuildJobExecutor implements JobExecutor {
    private static final Logger LOG = LoggerFactory.getLogger(BuildJobExecutor.class);

    /**
     * Retrieve a resource resolver factory for build processing.
     *
     * @return  the resource resolver factory
     */
    protected abstract ResourceResolverFactory getResolverFactory();

    /**
     * Subclass-specific build processing method.
     *
     * @param job
     * @param context
     * @param resolver
     * @return  the result of build processing
     */
    protected abstract JobExecutionResult build(Job job, JobExecutionContext context, ResourceResolver resolver);

    @Override
    public JobExecutionResult process(Job job, JobExecutionContext context) {
        //
        //  Prepare for the log directory and file.
        //
        final String jobId = job.getProperty("jobId", String.class);
        final Path logPath = Paths.get(job.getProperty("logPath", String.class));
        final Path logParentPath = logPath.getParent();
        if (logParentPath != null) {
            try {
                Files.createDirectories(logParentPath);
            }
            catch (final IOException e) {
                return handleError(context, "Unable to create log directory " + logParentPath);
            }
        }

        if (Boolean.TRUE.equals(job.getProperty("resetLog"))) {
            try {
                Files.deleteIfExists(logPath);
            }
            catch (final IOException e) {
                return handleError(context, "Unable to clear log file " + logPath);
            }
        }

        LOG.info("Starting build job with ID " + jobId);
        ResourceResolver resolver;
        try {
            resolver = getResolverFactory().getServiceResourceResolver(null);
        }
        catch (final LoginException e) {
            return handleError(context, "Unable to get build job resource resolver; check cbservice user configuration.");
        }

        //
        //  Perform the build operation.  Logging to the job-specific log file starts here.
        //
        MDC.put("jobId", jobId);
        MDC.put("logPath", logPath.toString());
        try {
            final JobExecutionResult result = build(job, context, resolver);
            if (job.getProperty("damLogPath") != null) {
                final Path damLogPath = Paths.get(job.getProperty("damLogPath", String.class));
                saveLog(resolver, logPath, damLogPath);
            }
            return result;
        }
        catch (final Throwable e) {
            LOG.error("Build job failed with an exception.", e);
            return handleError(context, "Build job failed with an exception: " + e.getMessage());
        }
        finally {
            MDC.remove("jobId");
            MDC.remove("logPath");
            resolver.close();
        }
    }

    private JobExecutionResult handleError(JobExecutionContext context, String message) {
        LOG.error(message);
        return context.result().message(message).cancelled();
    }

    private void saveLog(ResourceResolver resolver, Path logPath, Path damLogPath) {
        try (BufferedReader reader = Files.newBufferedReader(logPath, StandardCharsets.UTF_8)) {
            final SlingResourceProvider storageProvider = new SlingResourceProvider();
            storageProvider.setResolver(resolver);
            final StorageNode damLogNode = storageProvider.get(damLogPath);
            damLogNode.copyFromReader(reader, StandardCharsets.UTF_8);
            resolver.commit();
        }
        catch (final IOException e) {
            LOG.error("Unable to move build log " + logPath + " to DAM resource " + damLogPath, e);
        }
    }
}




Re: Directing Sling job logging output to separate files?

chetan mehrotra
When providing a custom logback.xml, do remember to add the Sling-specific
handlers [1]. Otherwise the OSGi integration of the logging configuration
will not work as expected.

Chetan Mehrotra
[1] https://sling.apache.org/documentation/development/logging.html#external-config-file
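Concretely, the additions the linked page describes for an external config file look roughly like the fragment below (worth double-checking against the docs for your Sling version):

```xml
<configuration>
  <!-- Sling-specific rules so loggers/appenders declared via OSGi config
       are still picked up when an external logback.xml is in use -->
  <newRule pattern="*/configuration/osgi"
           actionClass="org.apache.sling.commons.log.logback.OsgiAction"/>
  <newRule pattern="*/configuration/appender-ref-osgi"
           actionClass="org.apache.sling.commons.log.logback.OsgiAppenderRefAction"/>
  <osgi/>

  <!-- ... the rest of the configuration, e.g. the SIFT appender ... -->
</configuration>
```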


Re: Directing Sling job logging output to separate files?

Robert Munteanu-2
In reply to this post by John Logan
On Thu, 2017-09-14 at 01:15 +0000, John Logan wrote:
> I got this working and thought I'd follow up with what I did in case
> anyone else needs this sort of thing.

Looks interesting, thanks for sharing.

Robert


Re: Directing Sling job logging output to separate files?

John Logan
In reply to this post by chetan mehrotra
Thanks, Chetan.  Actually, I ran into a problem where I need to subclass SiftingAppender to ensure that the nested appender is closed at the end of a log session, so configuring the logging via OSGi instead of a custom logback.xml may make more sense.
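(Aside: logback's stock SiftingAppender can also be told to end a session by logging an event carrying the FINALIZE_SESSION marker, which stops and detaches the nested appender after a short linger period; that might avoid the subclass entirely. An untested sketch, assuming the MDC keys from the earlier post:)

```java
import static ch.qos.logback.classic.ClassicConstants.FINALIZE_SESSION_MARKER;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class FinalizeSessionSketch {
    private static final Logger LOG = LoggerFactory.getLogger(FinalizeSessionSketch.class);

    static void runJob(String jobId, String logPath) {
        MDC.put("jobId", jobId);
        MDC.put("logPath", logPath);
        try {
            LOG.info("job work happens here");
        } finally {
            // Tells the SiftingAppender to end this session's nested appender.
            LOG.info(FINALIZE_SESSION_MARKER, "job {} done", jobId);
            MDC.remove("jobId");
            MDC.remove("logPath");
        }
    }
}
```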


From looking at the Sling logging page and WebConsoleTestActivator.java in the Sling source as starting points, it looks like I would register my custom appender and filter in the BundleActivator.start() method like so:


        final Dictionary<String, Object> appenderProps = new Hashtable<String, Object>();
        final String[] loggers = { "ROOT" };
        appenderProps.put("loggers", loggers);
        context.registerService(Appender.class.getName(), new CloseableSiftingAppender(), appenderProps);

        final Dictionary<String, Object> filterProps = new Hashtable<String, Object>();
        filterProps.put("appenders", "CloseableSiftingAppender");
        context.registerService(Filter.class.getName(), new RemoveUndirectedMessageFilter(), filterProps);

From there I have a couple of questions:

1.) How do I provide the XML configuration to my sifting appender?  Do I just define a ConfigProvider that supplies an InputSource with the XML, and register it like:

        context.registerService(ConfigProvider.class.getName(), new MyConfigProvider(), null);

2.) If I need to filter what the default Sling logging configuration writes to error.log (I want to ensure that the messages handled by the sifting appender are not also logged to error.log), what is the best way to do it?
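One approach I'm considering for question 2 is a plain logback filter, attached to the error.log appender, that denies any event carrying the jobId MDC key; roughly (hypothetical sketch, names invented):

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

/**
 * Denies events that carry the jobId MDC key, so job output only lands
 * in the per-job sifted files and not also in error.log.
 */
public class JobLogSuppressFilter extends Filter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent event) {
        if (event.getMDCPropertyMap().containsKey("jobId")) {
            return FilterReply.DENY;   // handled by the sifting appender instead
        }
        return FilterReply.NEUTRAL;    // let levels / other filters decide
    }
}
```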

John



________________________________
From: Chetan Mehrotra <[hidden email]>
Sent: Thursday, September 14, 2017 1:23:45 AM
To: [hidden email]
Subject: Re: Directing Sling job logging output to separate files?

While providing a custom logback.xml do remember to add Sling specific
handlers [1]. Otherwise OSGi integration of logging config would not
work as expected

Chetan Mehrotra
[1] https://sling.apache.org/documentation/development/logging.html#external-config-file

On Thu, Sep 14, 2017 at 6:45 AM, John Logan <[hidden email]> wrote:

> I got this working and thought I'd follow up with what I did in case anyone else needs this sort of thing.
>
>
> I used SiftingAppender pretty much as shown in any of the examples one can find online.  I put the following logback.xml in my sling.home, and pointed the OSGI configuration variable org.apache.sling.commons.log.configurationFile to it:
>
>
> <configuration>
>   <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
>     <discriminator>
>       <key>jobId</key>
>       <defaultValue>/var/log/sling/error.log</defaultValue>
>     </discriminator>
>     <sift>
>       <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
>         <file>${logPath}</file>
>         <layout class="ch.qos.logback.classic.PatternLayout">
>           <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
>         </layout>
>       </appender>
>     </sift>
>   </appender>
>
>   <root level="info">
>     <appender-ref ref="SIFT" />
>   </root>
> </configuration>
>
>
> An abstract wrapper implementation of JobExecutor sets up the MDC to conform to what's expected in the logback.xml, and provides a few other niceties.  Replace the SlingResourceProvider/StorageNode stuff with whatever you use to access files and Sling nodes; it's just a simple abstraction layer that we happen to be using.
>
>
> package com.xyz.content.sling.processor;
>
> import java.io.BufferedReader;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
>
> import org.apache.sling.api.resource.LoginException;
> import org.apache.sling.api.resource.ResourceResolver;
> import org.apache.sling.api.resource.ResourceResolverFactory;
> import org.apache.sling.event.jobs.Job;
> import org.apache.sling.event.jobs.consumer.JobExecutionContext;
> import org.apache.sling.event.jobs.consumer.JobExecutionResult;
> import org.apache.sling.event.jobs.consumer.JobExecutor;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import org.slf4j.MDC;
>
> import com.xyz.content.sling.storage.SlingResourceProvider;
> import com.xyz.storage.StorageNode;
>
> /**
>  * Sling job executor with build job logging.
>  *
>  * This class wraps build processing with job-specific log management.
>  * Job submitters need to configure the following job properties
>  * prior to submitting a job:
>  * * jobId - A symbolic identifier for the job.  This value must be
>  *           unique throughout the life of the job.  Subclasses
>  *           may use this value as a job name for submitting
>  *           and monitoring SLURM jobs.
>  * * logPath - The pathname to the build log file.
>  * * damLogPath - If specified, the DAM resource to which
>  *                the log output should be copied upon build completion.
>  * * clearLog - An option boolean parameter which, when true,
>  *              removes the build log file if it exists prior
>  *              to commencing the build.
>  *
>  * @author john
>  *
>  */
> public abstract class BuildJobExecutor implements JobExecutor {
>     private static Logger LOG = LoggerFactory.getLogger(BuildJobExecutor.class);
>
>     /**
>      * Retrieve a resource resolver factory for build processing.
>      *
>      * @return  the resource resolver factory
>      */
>     protected abstract ResourceResolverFactory getResolverFactory();
>
>     /**
>      * Subclass-specific build processing method.
>      *
>      * @param job
>      * @param context
>      * @param resolver
>      * @return  the result of build processing
>      */
>     protected abstract JobExecutionResult build(Job job, JobExecutionContext context, ResourceResolver resolver);
>
>     @Override
>     public JobExecutionResult process(Job job, JobExecutionContext context) {
>         //
>         //  Prepare for the log directory and file.
>         //
>         final String jobId = job.getProperty("jobId", String.class);
>         final Path logPath = Paths.get(job.getProperty("logPath", String.class));
>         final Path logParentPath = logPath.getParent();
>         if (logParentPath != null) {
>             try {
>                 Files.createDirectories(logParentPath);
>             }
>             catch (final IOException e) {
>                 return handleError(context, "Unable to create log directory " + logParentPath);
>             }
>         }
>
>         if (Boolean.TRUE.equals(job.getProperty("resetLog"))) {
>             try {
>                 Files.deleteIfExists(logPath);
>             }
>             catch (final IOException e) {
>                 return handleError(context, "Unable to clear log file " + logPath);
>             }
>         }
>
>         LOG.info("Starting build job with ID " + jobId);
>         ResourceResolver resolver;
>         try {
>             resolver = getResolverFactory().getServiceResourceResolver(null);
>         }
>         catch (final LoginException e) {
>             return handleError(context, "Unable to get build job resource resolver, check cbservice user configuration.");
>         }
>
>         //
>         //  Perform the build operation.  Logging to the job-specific log file starts here.
>         //
>         MDC.put("jobId", jobId);
>         MDC.put("logPath", logPath.toString());
>         try {
>             final JobExecutionResult result = build(job, context, resolver);
>             if (job.getProperty("damLogPath") != null) {
>                 final Path damLogPath = Paths.get(job.getProperty("damLogPath", String.class));
>                 saveLog(resolver, logPath, damLogPath);
>             }
>             return result;
>         }
>         catch (final Throwable e) {
>             LOG.error("Build job failed with an exception.", e);
>             return handleError(context, "Build job failed with an exception: " + e.getMessage());
>         }
>         finally {
>             MDC.remove("jobId");
>             MDC.remove("logPath");
>             resolver.close();
>         }
>     }
>
>     private JobExecutionResult handleError(JobExecutionContext context, String message) {
>         LOG.error(message);
>         return context.result().message(message).cancelled();
>     }
>
>     private void saveLog(ResourceResolver resolver, Path logPath, Path damLogPath) {
>         try (BufferedReader reader = Files.newBufferedReader(logPath, StandardCharsets.UTF_8)) {
>             final SlingResourceProvider storageProvider = new SlingResourceProvider();
>             storageProvider.setResolver(resolver);
>             final StorageNode damLogNode = storageProvider.get(damLogPath);
>             damLogNode.copyFromReader(reader, StandardCharsets.UTF_8);
>             resolver.commit();
>         }
>         catch (final IOException e) {
>             LOG.error("Unable to move build log " + logPath + " to DAM resource " + damLogPath, e);
>         }
>     }
> }
Re: Directing Sling job logging output to separate files?

chetan mehrotra
> How do I provide the XML configuration to my sifting appender?  Do I just define a ConfigProvider that provides an InputSource with the XML, and register that like:

So far the usage pattern is that you register fully configured
appenders as OSGi services. The XML config support is more about
stitching together the logback config. That said, you can try the
approach you describe, i.e. register the ConfigProvider and see if it works.

> 2.) If I need to filter what is logged to error.log by the default Sling Logging configuration (I want to ensure that the messages handled by the sifting appender are not also logged to error.log), what is the best way to do it?

Register the filter as OSGi service with appenders="*" [1]
Chetan Mehrotra
[1] https://sling.apache.org/documentation/development/logging.html#filters-as-osgi-services
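
For reference, such a filter might look roughly like the sketch below. The class name RemoveUndirectedMessageFilter comes from John's post; the body is only a guess that assumes the job code puts a "jobId" key into the MDC while a job is running:

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

/**
 * Sketch of a filter that keeps job-log messages out of error.log.
 * When registered as an OSGi service with the property appenders="*",
 * Sling's log support attaches it to the configured appenders.
 */
public class RemoveUndirectedMessageFilter extends Filter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent event) {
        // Events logged inside a job carry the "jobId" MDC key; deny them
        // here so that only the sifting appender (which carries no such
        // filter) writes them. Everything else is left alone.
        return event.getMDCPropertyMap().containsKey("jobId")
                ? FilterReply.DENY
                : FilterReply.NEUTRAL;
    }
}
```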

On Fri, Sep 15, 2017 at 11:20 PM, John Logan <[hidden email]> wrote:

> Thanks, Chetan.  Actually, I ran into a problem where I need to create a subclass of SiftingAppender to ensure that the appender is closed at the end of a log session, so it looks like configuring the logging via OSGi instead of a custom logback.xml might make more sense.
>
>
> From looking at the Sling Logging page and WebConsoleTestActivator.java from the Sling source as a starting point, it looks like I would register my custom appender and filter like so in the BundleActivator.start() method:
>
>
>         final Dictionary<String, Object> appenderProps = new Hashtable<String, Object>();
>         final String[] loggers = { "ROOT" };
>         appenderProps.put("loggers", loggers);
>         context.registerService(Appender.class.getName(), new CloseableSiftingAppender(), appenderProps);
>
>         final Dictionary<String, Object> filterProps = new Hashtable<String, Object>();
>         filterProps.put("appenders", "CloseableSiftingAppender");
>         context.registerService(Filter.class.getName(), new RemoveUndirectedMessageFilter(), filterProps);
>
> From there I have a couple of questions:
>
> 1.) How do I provide the XML configuration to my sifting appender?  Do I just define a ConfigProvider that provides an InputSource with the XML, and register that like:
>
>         context.registerService(ConfigProvider.class.getName(), new MyConfigProvider(), null);
>
> 2.) If I need to filter what is logged to error.log by the default Sling Logging configuration (I want to ensure that the messages handled by the sifting appender are not also logged to error.log), what is the best way to do it?
>
> John
>
>
>
> ________________________________
> From: Chetan Mehrotra <[hidden email]>
> Sent: Thursday, September 14, 2017 1:23:45 AM
> To: [hidden email]
> Subject: Re: Directing Sling job logging output to separate files?
>
> When providing a custom logback.xml, do remember to add the Sling-specific
> handlers [1]. Otherwise the OSGi integration of the logging config will not
> work as expected.
>
> Chetan Mehrotra
> [1] https://sling.apache.org/documentation/development/logging.html#external-config-file
>
> On Thu, Sep 14, 2017 at 6:45 AM, John Logan <[hidden email]> wrote:
>> I got this working and thought I'd follow up with what I did in case anyone else needs this sort of thing.
>>
>>
>> I used SiftingAppender pretty much as shown in any of the examples one can find online.  I put the following logback.xml in my sling.home, and pointed the OSGi configuration property org.apache.sling.commons.log.configurationFile to it:
>>
>>
>> <configuration>
>>   <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
>>     <discriminator>
>>       <key>jobId</key>
>>       <defaultValue>/var/log/sling/error.log</defaultValue>
>>     </discriminator>
>>     <sift>
>>       <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
>>         <file>${logPath}</file>
>>         <layout class="ch.qos.logback.classic.PatternLayout">
>>           <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
>>         </layout>
>>       </appender>
>>     </sift>
>>   </appender>
>>
>>   <root level="info">
>>     <appender-ref ref="SIFT" />
>>   </root>
>> </configuration>
Re: Directing Sling job logging output to separate files?

John Logan
Hi Chetan, thanks again for your help; I have now configured logging to direct the output of jobs to individual files so that the lifecycle of these logs can be managed separately from the lifecycle of the general log.


In case anyone else needs to do something like this, here's the recipe that I used.


To prevent job log messages from appearing in the general log, I first registered a filter with appenders="*" in my bundle activator start() method as you suggested.  This filter denies messages received while the MDC is configured for build logging.


Next in the start() method, I configured the SiftingAppender that handles the job logs.  I did this programmatically [1] as I didn't see a good way to do this via a ConfigProvider:

- Retrieve the logging context

- Create an instance of a SiftingAppender subclass that closes file appenders when a FINALIZE_SESSION marker is detected [2].

- Create an instance of MDCBasedDiscriminator and configure the SiftingAppender with it.

- Set the AppenderFactory for the sifting appender to an anonymous implementation that creates FileAppender instances using an MDC parameter as the log path.  To work around a bug [3] in the logback 1.1.7 version used by Sling 9, the AppenderFactory does not set the appender name.

- Add a filter to the SiftingAppender that denies messages received while the MDC is not configured for build logging.


Finally, in the start() method, I called bundleContext.registerService() to register the appender and associate it with the logger list { "ROOT" }.
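
Pulling those steps together, the start() method might look roughly like the following. This is a sketch, not the exact code: it uses a plain SiftingAppender (the FINALIZE_SESSION-closing subclass from [2] is omitted), assumes the jobId/logPath MDC keys from the earlier posts, and is written against the logback 1.1.x API.

```java
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.sift.MDCBasedDiscriminator;
import ch.qos.logback.classic.sift.SiftingAppender;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.Context;
import ch.qos.logback.core.FileAppender;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.sift.AppenderFactory;
import ch.qos.logback.core.spi.FilterReply;

public class JobLoggingActivator implements BundleActivator {

    @Override
    public void start(BundleContext bundleContext) {
        // Retrieve the logging context.
        final LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();

        final SiftingAppender sift = new SiftingAppender();
        sift.setContext(loggerContext);
        sift.setName("JobSiftingAppender");

        // Discriminate on the jobId MDC key set by the job executor.
        final MDCBasedDiscriminator discriminator = new MDCBasedDiscriminator();
        discriminator.setContext(loggerContext);
        discriminator.setKey("jobId");
        discriminator.setDefaultValue("unknown");
        discriminator.start();
        sift.setDiscriminator(discriminator);

        // Build one FileAppender per job, taking the file path from the MDC.
        sift.setAppenderFactory(new AppenderFactory<ILoggingEvent>() {
            @Override
            public Appender<ILoggingEvent> buildAppender(Context context, String discriminatingValue) {
                final FileAppender<ILoggingEvent> appender = new FileAppender<>();
                appender.setContext(context);
                // Deliberately no appender.setName(...): leaving the name null
                // bypasses the duplicate-file tracking that LOGBACK-1167 broke
                // in logback 1.1.6/1.1.7.
                appender.setFile(MDC.get("logPath"));
                final PatternLayoutEncoder encoder = new PatternLayoutEncoder();
                encoder.setContext(context);
                encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n");
                encoder.start();
                appender.setEncoder(encoder);
                appender.start();
                return appender;
            }
        });

        // Ignore events that do not belong to a job log session.
        sift.addFilter(new Filter<ILoggingEvent>() {
            @Override
            public FilterReply decide(ILoggingEvent event) {
                return event.getMDCPropertyMap().containsKey("jobId")
                        ? FilterReply.NEUTRAL
                        : FilterReply.DENY;
            }
        });
        sift.start();

        // Hand the appender to Sling's log support, attached to ROOT.
        final Dictionary<String, Object> props = new Hashtable<>();
        props.put("loggers", new String[] { "ROOT" });
        bundleContext.registerService(Appender.class.getName(), sift, props);
    }

    @Override
    public void stop(BundleContext bundleContext) {
        // Service registrations are cleaned up automatically on bundle stop.
    }
}
```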


[1] https://gist.github.com/kazimsarikaya/8645769 was a great template for this.

[2] https://jira.qos.ch/browse/LOGBACK-1066 summarizes the workarounds that can be used to get SiftingAppender to close log files promptly.

[3] https://jira.qos.ch/browse/LOGBACK-1167 describes a bug in logback 1.1.6 and 1.1.7 in a feature designed to prevent multiple FileAppenders from writing to the same log file path.  In my Sling app, this meant that logging failed to work when I ran a job for a given item a second time.  A look at the source code showed that this tracking feature can be bypassed by leaving the name null when creating a FileAppender.  The bug is fixed in logback 1.1.9.


jgl


________________________________
From: Chetan Mehrotra <[hidden email]>
Sent: Sunday, September 17, 2017 12:23:41 AM
To: [hidden email]
Subject: Re: Directing Sling job logging output to separate files?
