Sunday, May 26, 2013

Understanding the WebSphere config service

I just finished an article that describes some internals of the WebSphere configuration service. In particular it explains in detail how configuration objects (as seen by wsadmin) relate to XML documents in the configuration repository. The full article can be found here:

http://www.scribd.com/doc/143797650/Understanding-the-WebSphere-config-service

Publishing releases using Github, Bintray and maven-release-plugin

Project setup

  1. Add an SCM section to your POM so that maven-release-plugin can create a tag during the release process:

    <scm>
      <connection>scm:git:https://github.com/github-user/github-repo.git</connection>
      <developerConnection>scm:git:git@github.com:github-user/github-repo.git</developerConnection>
      <url>https://github.com/github-user/github-repo</url>
      <tag>HEAD</tag>
    </scm>
    

    The developer connection uses the SSH protocol, so that no password authentication is required. Note that this assumes that you uploaded your public key to Github.

  2. Go to the Bintray Web site and create a new package in your repository.

  3. Add a distribution management section to your POM with a reference to that package:

    <distributionManagement>
      <repository>
        <id>bintray</id>
        <url>https://api.bintray.com/maven/bintray-user/bintray-repo/package</url>
      </repository>
    </distributionManagement>
    
  4. Add your Bintray credentials to the servers section of your Maven settings.xml. Note that the Bintray REST API uses API keys instead of passwords:

    <server>
      <id>bintray</id>
      <username>bintray-user</username>
      <password>bintray-api-key</password>
    </server>
    
  5. Add the release plugin to your POM:

    <build>
      <plugins>
        <plugin>
          <artifactId>maven-release-plugin</artifactId>
          <version>2.4.1</version>
          <configuration>
            <useReleaseProfile>false</useReleaseProfile>
            <releaseProfiles>release</releaseProfiles>
            <autoVersionSubmodules>true</autoVersionSubmodules>
          </configuration>
        </plugin>
      </plugins>
    </build>
    

    To deploy source and Javadoc JARs for releases, use the following release profile:

    <profiles>
      <profile>
        <id>release</id>
        <build>
          <plugins>
            <plugin>
              <artifactId>maven-source-plugin</artifactId>
              <executions>
                <execution>
                  <id>attach-sources</id>
                  <goals>
                    <goal>jar</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
            <plugin>
              <artifactId>maven-javadoc-plugin</artifactId>
              <executions>
                <execution>
                  <id>attach-javadocs</id>
                  <goals>
                    <goal>jar</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>
    

    You should validate the release profile by executing it as follows:

    mvn -Prelease clean install
    

Performing a release

  1. Go to the Bintray Web site and add a new version for the package.

  2. Execute the release process as follows (a dry-run variant is shown after this list):

    mvn release:prepare
    mvn release:perform
    
  3. Go to the Bintray Web site and publish the new version.
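
If you want to rehearse step 2 before performing the actual release, maven-release-plugin has a dry-run mode that goes through the POM transformations without committing, tagging or deploying anything; release:clean removes the files generated by the dry run:

mvn release:prepare -DdryRun=true
mvn release:clean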

Using the project as a dependency

To use your project as a dependency in another project, add the following repository declaration:

<repositories>
  <repository>
    <id>bintray</id>
    <url>http://dl.bintray.com/bintray-user/bintray-repo</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>
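
The artifact itself is then declared like any other dependency; the coordinates below are placeholders for the actual groupId, artifactId and released version of your project:

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>my-artifact</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>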

Thursday, May 23, 2013

Customizing the proxy.log file of a WebSphere Proxy Server or ODR

In a previous post I discussed how the HTTP access logs on a WebSphere application server can be customized. In this post I will analyze how to do that with the access logs of a WebSphere 8.5 proxy server or on-demand router (ODR).

The access logs for a proxy server (resp. ODR) can be configured under "Proxy settings" (resp. "On Demand Router settings") in the admin console. They are enabled by default, and the location of the log file is ${SERVER_LOG_ROOT}/proxy.log. The file uses the standard NCSA format, which means that it lacks two important pieces of information that one usually wants to have for an HTTP proxy server:

  • The response time.
  • The address (and port number) of the application server that handled the request.

With IBM HTTP Server, one would add that information to the access logs using the %D and %{WAS}e format strings in the LogFormat directive. The question is whether this can also be done with ODR.
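
For comparison, the IBM HTTP Server configuration would contain something along these lines (a sketch based on the standard combined format; the WAS environment variable is set by the WebSphere plug-in to identify the application server that handled the request, and the log file location is only an example):

LogFormat "%h %l %u %t \"%r\" %>s %b %D %{WAS}e" was_combined
CustomLog logs/access_log was_combined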

It turns out that the format of the proxy.log file is not customizable. In fact, entries are written to the log file by an instance of com.ibm.ws.proxy.log.http.HttpProxyLogImpl and that instance is managed by com.ibm.ws.proxy.log.http.HttpLogFilter. One can see from the package name that these classes are specific to the HTTP proxy code in WebSphere; although the output format is similar, they don't reuse the code responsible for writing the HTTP access logs in WebSphere application server (i.e. com.ibm.ws.http.logging.impl.AccessLogger). That means that the access log customization feature introduced by PM46717 doesn't apply to proxy.log.

On the other hand, on a proxy server or ODR it is still possible to enable the standard access logs on the HTTP channel used by the proxy transport. In the admin console, these settings can be found under "Proxy server transports" or "On Demand Router transports" (and then HTTP_PROXY_CHAIN or HTTPS_PROXY_CHAIN depending on the protocol that is used). The format of that access log can then be specified using the accessLogFormat custom property. Unfortunately that doesn't give the expected result:

  • The response time values produced by %D are incorrect for clients that use persistent HTTP connections. As noted in my previous post, this is already broken in WAS, and it is not so surprising that it doesn't work in proxy server and ODR either. What is more surprising is that the feature is not broken in the same way: for a connection with keep-alive, the proxy server or ODR reports the time elapsed between the transmission of the previous response and the transmission of the current response, while WAS reports the time elapsed between the establishment of the persistent connection and the transmission of the current response.
  • %h no longer outputs the client IP address but the address of the application server that has handled the request. On the other hand, %a and %A both output the address of the host where the proxy server or ODR is running. This means that it is no longer possible to log the IP address of the client. In addition, there is no way to log the port number of the application server that handled the request.

Unfortunately the conclusion is that (as of version 8.5.0.2) it is not possible to configure a proxy server or ODR to produce custom access logs that include the response time and the address/port of the target application server. At least, that can't be achieved using out-of-the-box features available in WebSphere. There is however another option, namely to create a custom log filter as described in the WebSphere infocenter (Creating a custom filter and deploying it to a proxy server). That's a bit of overkill if the goal is simply to customize the access logs, but it would also offer the interesting possibility of sending the access logs directly to a centralized log management system such as logstash.

Wednesday, May 22, 2013

Customizing WebSphere access logs to include the response time

WebSphere Application Server has always had the option to enable HTTP access logging. In the admin console, this feature is called "NCSA access logging" because the log format is compatible with the old NCSA HTTPd. Quite naturally, being based on the original NCSA code, Apache HTTPD also supports the same log format. In addition, Apache HTTPD allows the access log format to be customized. Until recently, this was not the case in WebSphere: the administrator only had the choice between two standard log formats, "Common" and "Combined".

Customized access logs were introduced by APARs PM46717 and PM68250 in WAS 7.0.0.25 and 8.0.0.2. They are enabled by a custom property called accessLogFormat that needs to be configured on the HTTP channel. As noted in PM46717, this only works if the "Use chain-specific logging" option is enabled on the HTTP channel where the property is set. The accessLogFormat property uses a syntax similar to the LogFormat directive in HTTPD; an example value is shown after the list below. In particular, the %D format string can be used to add the response time to the logs. However, there are a couple of issues with that:

  • As described in the previously mentioned APARs, the time is measured with millisecond accuracy but microsecond precision: the last three digits are always 0. That is a bit surprising considering that Java has a method (System.nanoTime()) that gives access to a timer with sub-millisecond resolution. The reason may be that in some Java versions that method relies on a per-CPU timer, and using it to calculate time differences on a multi-processor system may yield incorrect results. Fortunately, millisecond accuracy should be enough in most cases.
  • According to APAR PM81277, the %D format string was not correctly implemented before WAS 7.0.0.29.
  • On WAS 8.5.0.1 and 8.5.0.2 (other versions have not been tested but are likely affected too) %D actually doesn't represent the time elapsed between the reception of the request and the transmission of the response, but the time elapsed between the establishment of the HTTP connection and the transmission of the response. This is fine if the client uses a new connection for each request, but it obviously makes the feature totally useless with clients that use persistent HTTP connections.
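
As mentioned before the list, the accessLogFormat property is set on the HTTP channel; a typical value might look as follows. The exact set of supported directives is documented in PM46717; the value below assumes that the usual Apache-style %h, %u, %t, %r, %s and %b directives are available in addition to %D:

accessLogFormat = %h %u %t "%r" %s %b %D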

Finally, it is important to note that for obvious reasons the elapsed time measured on the server side always differs from the actual response time seen by the client. If one assumes that network delays are negligible, then one can expect both to be approximately equal. However, there is one case where there can be a large discrepancy between the value reported by WebSphere and the actual response time, namely when the Web container thread pool becomes saturated. The reason is that the HTTP transport in WebSphere uses a single thread pool to process asynchronous I/O events and to let applications handle HTTP requests. That means that if all available threads are blocked in application code (e.g. waiting for the execution of database requests), then this will also prevent WebSphere from processing I/O events. These I/O events will then be queued in the operating system until more threads become available again. That effectively protects the server from overload, but it introduces a bias into the response time measurement, because the timer only starts when WebSphere begins processing the request.

That problem doesn't occur with the access logs produced by a Web server (such as IBM HTTP Server with the WebSphere plug-in) in front of WAS. Therefore, in that kind of setup it is always better to rely on the response time measured by the Web server (which will be much closer to the actual response time seen by the client) and not to use the WAS access logs as a substitute for the Web server access logs.

Monday, May 20, 2013

Syria: "There are no moderate rebels left"

This is a (partial) translation of an article entitled "Es gibt keine gemäßigten Rebellen mehr" published by Jürgen Todenhöfer on May 3 in the German newspaper Frankfurter Allgemeine Zeitung. From 1972 to 1990, Todenhöfer was a member of the German parliament for the CDU (Angela Merkel's party). Until 2008 he sat on the board of the German media group "Burda". Considered part of the right wing of the CDU, he was a fervent supporter of the American policy of supporting the mujahideen in Afghanistan. At that time he made several trips to the combat zones controlled by the mujahideen. Since 2001, however, Todenhöfer has been known as an opponent of the US interventions in Afghanistan and Iraq. He has also visited those conflict zones, as well as several countries of the "Arab Spring".

It is spring in Damascus. In front of me, in a poor Damascus dwelling, sits Salem, a 27-year-old rebel commander. He belongs to the "Free Syrian Army" (FSA), which is considered moderate in the West. [...] He says:

"We will fight until the regime falls. If necessary for years to come, even if half of the Syrians die. We want an Islamic caliphate like in Saudi Arabia, but not only for the rich, for the poor as well. It is not for your democracy that we are dying."

Salem reports that many of his fighters are defecting to Jabhat al-Nosra, the al-Qaeda affiliate. According to him, at more than 300 dollars a month, that group pays its fighters twice as much. Al-Nosra is also said to have better weapons. As the "vanguard of the revolution", al-Nosra gets first pick of the weapons delivered from abroad. That does not bother Salem. According to him, they are fighting for the same theocracy; al-Qaeda simply pursues global objectives, whereas the FSA's ambitions are purely national. That the "opposition in exile" occasionally and cautiously distances itself from al-Nosra does not matter. According to Salem, it plays no role inside Syria.

In a prison I meet three suspected al-Qaeda fighters: an Iranian, a Turk and a Palestinian. Only the thirty-year-old from Haifa, who lived for a long time as a refugee in Syria, openly claims allegiance to al-Qaeda. [...]

According to him, after Syria, al-Qaeda will attack Europe and the United States. His eyes light up at the idea of "blowing himself up in New York". That this would kill innocent people does not interest him. Nor does the fact that the Quran forbids the killing of innocents. Like the rebel Salem, he considers that "the establishment of a caliphate is more important than the lives of 10 million Syrians". [...]

On two occasions I speak with President Assad. [...] I ask him about his objectives. He cites "the liberation of Syria from al-Qaeda" and the "restoration of a secular society in which all religions have their place": Muslims, Christians, everyone. He says he is ready to make significant concessions to make a fair peace possible. [...]

This is my sixth trip to Syria since the beginning of the unrest. I have talked at length with representatives of the opposition and for hours with the Syrian president. What began as a legitimate democratic uprising has degenerated into a mixture of fanatical sectarian conflict and a proxy war against Iran. [...]

The silent majority in Syria watches this war with horror and bewilderment. I ask a doctor who stands somewhere between the front lines about Assad's role. He answers: "Especially at the beginning, Assad made mistakes. But he is not Stalin, rather Putin."

Behind the scenes, Saudi Arabia and Qatar are pulling the strings. They have handed al-Qaeda a phenomenal triumph. I have seen al-Qaeda in Afghanistan, in Pakistan and in Iraq. Compared to al-Nosra, those were insignificant organizations. Al-Qaeda's real rise has taken place in Syria. With every passing day of this war, al-Qaeda becomes more powerful, more attractive and even more respected. The channels Al Jazeera and Al-Arabiya report daily on its "exploits". Peter Bergen, a recognized American expert, calls al-Nosra the "most effective and disciplined" group fighting Assad, capable one day of "attacking the West".

The arms deliveries and financial aid coming from Saudi Arabia and Qatar and approved by Washington, together with Western support for the rebels, are fueling the Syrian conflict. This has the effect of a Marshall Plan for al-Qaeda. De facto, the American sorcerer's apprentices are playing into al-Qaeda's hands. Officially, of course, they continue to bet on the "moderate rebels". But those no longer exist. American policy in the Middle East has rarely been more short-sighted and more dangerous.

Thursday, May 9, 2013

Using styles in Microsoft Visio

If you want to use Visio to create complex diagrams with a consistent look, you'll definitely want to use styles. Visio supports this, but the feature is hidden in Visio 2007. The following Microsoft support document explains how to enable access to the menu items that allow you to define and apply styles:

What happened to styles?

In addition to the Style and Define Styles commands on the Format menu mentioned in the document, you may also want to check out the Drawing Explorer Window command in the View menu. The latter gives you an overview of the defined styles and their relationships.

Monday, May 6, 2013

Interpreting JAX-WS client connection errors on WebSphere

The present article describes the exceptions thrown by the WebSphere runtime for various connection related error conditions that may occur during a Web service call performed using a JAX-WS client. This information is useful in determining the root cause of a connection problem when only the exception message in the WebSphere logs is available.

Errors that occur during the establishment of the TCP connection trigger exceptions with informative error messages (such as Connection refused) and are easy to debug. Things are less trivial for exceptions caused by timeouts and premature connection closures, and we will concentrate on these types of error conditions here.

Note that the scenarios described in this article were tested on a WAS 7.0 server running on Linux. Since WebSphere's HTTP transport is based on a proprietary native asynchronous I/O library (and not the Java NIO framework), the error messages may depend on the operating system. Also note that the information in this article doesn't apply to the JAX-WS thin client (com.ibm.jaxws.thinclient_7.0.0.jar) used in standalone applications. The reason is that the JAX-WS runtime only uses the native async I/O library inside the application server, but falls back to Java NIO when running in a standalone application.

Client read timeout

The first case to consider is a client read timeout. This occurs if the server doesn't reply within the configured timeout. In this case, the WebSphere channel framework internally generates the following exception:

com.ibm.io.async.AsyncTimeoutException(Async operation timed out, [Timeout, rc=0])

This exception will be wrapped in another exception:

java.net.SocketTimeoutException: Async operation timed out

This means that client read timeouts are easy to recognize.

Premature connection termination on server side

Another case to consider is the premature termination of the connection on the server side. There are various reasons why this may occur: failure of the target server, failure of a proxy/gateway between the client and the server, a proxy/gateway that forcefully closes the connection because of a timeout, etc.

If an HTTP client connection is closed by the server during a request (i.e. while the connection is not idle), the channel framework internally generates one of the following exceptions:

java.io.IOException: Async IO operation failed (1), reason: RC: 107 Transport endpoint is not connected
java.io.IOException: Async IO operation failed (3), reason: RC: 107 Transport endpoint is not connected

This exception is generated regardless of whether the connection closure is expected or not (as can be seen by activating traces for com.ibm.io.async.AbstractAsyncFuture). According to the HTTP protocol, there is a single case where the server side is expected to close the connection during a request, namely an HTTP response that has no Content-Length header and that is not chunked. In that case, once the HTTP status line and headers have been sent, connection closure is used to signal the end of the response body. Since the connection closure is expected, the exception generated by the channel framework is obviously not thrown to the application. If the connection closure by the server side was actually premature, then the response body will appear truncated (or empty) to the client. This will result in an XML parser error, or in the following exception if the response is completely empty:

org.apache.axis2.AxisFault: The input stream for an incoming message is null.
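
For illustration, a response that is delimited by connection closure looks roughly like this on the wire (headers simplified; note the absence of both Content-Length and Transfer-Encoding: chunked):

HTTP/1.1 200 OK
Content-Type: text/xml; charset=UTF-8
Connection: close

<soapenv:Envelope ...>...</soapenv:Envelope>
[the server closes the connection here to mark the end of the body]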

In all other cases, connection closure by the server side is not expected, and the exception generated by the channel framework is thrown to the application. This includes the following error conditions:

  • The connection is closed before sending the HTTP response code.

  • The HTTP response has a Content-Length header, but the server closes the connection before the specified number of bytes have been sent.

  • The HTTP response is chunked and the server closes the connection in the middle of a chunk or before the final (empty) chunk has been sent.

One important thing to know about the java.io.IOException: Async IO operation failed exception is that it is created in a very peculiar way. If one looks closely at the stack trace, one can see that the IOException object is actually taken from a cache:

java.io.IOException: Async IO operation failed (3), reason: RC: 107  Transport endpoint is not connected
        at com.ibm.io.async.AsyncLibrary$IOExceptionCache.<init>(AsyncLibrary.java:891)
        at com.ibm.io.async.AsyncLibrary$IOExceptionCache.get(AsyncLibrary.java:904)
        at com.ibm.io.async.AsyncLibrary.getIOException(AsyncLibrary.java:918)

The likely reason for this unusual (to say the least...) design is that, as noted above, the exception is generated internally even if the connection closure is actually expected. The cache is therefore required to avoid the large overhead associated with instantiating the exception in those cases where it is not thrown to the application (and where the service invocation actually completes successfully). The issue with that strategy is that the stack trace associated with the exception is not meaningful and has nothing to do with the location where the error actually occurred. It should be completely ignored when debugging java.io.IOException: Async IO operation failed errors.

As shown above, for connection termination the reason code appearing in the error message is always 107, but the code appearing in parentheses just after the Async IO operation failed part may differ from case to case. A value of 1 indicates that the socket was already closed before the read request was processed by the channel framework. This would e.g. occur if the server closes the connection immediately. A value of 3 indicates that the server closed the connection while the read request was pending. This typically occurs if the connection is closed because of a timeout.

Termination of idle persistent connections

To avoid the overhead of creating a new TCP connection for each HTTP request, HTTP 1.1 introduces the concept of persistent connections. According to section 8.1.4 of the HTTP 1.1 specification,

A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress.

The existence of this race condition is a well known problem with the HTTP protocol and was already identified in early proposals for persistent connections (See section 5.1 of https://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-4.pdf).

It should be noted that section 8.1.2.1 of the HTTP 1.1 specification also requires that

If the server chooses to close the connection immediately after sending the response, it SHOULD send a Connection header including the connection-token close.

Normally, an HTTP server only closes a persistent connection after it has been idle for a certain time or if a certain number of requests have been received on that connection. As shown above, in the latter case, the server is expected to use the Connection: close header to inform the client (since it knows in advance that it will close the connection). That means that in practice, with a well behaved HTTP server, the race condition will only occur if the keep-alive timeout configured on the client side is higher than the keep-alive timeout configured on the server side.

For JAX-WS clients on WebSphere, the default keep-alive timeout is 5 seconds, which is smaller than the keep-alive timeout usually seen on HTTP servers. Therefore with a well behaved HTTP server, the race condition should never occur. If the HTTP server is not so well behaved and closes the persistent connection too quickly (or closes the connection immediately after sending a response without a Connection: close header), then the JAX-WS client may trigger the following error:

java.io.IOException: Async IO operation failed (1), reason: RC: 107  Transport endpoint is not connected

This is consistent with the observations made in the previous section, because the connection closure is initiated by the server side and the socket is already closed when the read request for the response is processed by the channel framework (which explains the code 1).

Section 8.1.4 of the HTTP 1.1 specification describes one possible way to handle this problem:

[C]lients [...] MUST be able to recover from asynchronous close events. Client software SHOULD reopen the transport connection and retransmit the [request] so long as [it] is idempotent [...]. Non-idempotent [requests] MUST NOT be automatically retried.

This strategy is not generally usable because SOAP requests cannot be assumed to be idempotent.

Another strategy is to use the 100 (Continue) status as described in section 8.2.3: the client sends an Expect: 100-continue header and then waits for the intermediate 100 (Continue) response before sending the request body. This allows the client to detect a broken connection before sending the SOAP request. The drawback of this strategy is that it requires an additional network roundtrip, which to some extent defeats the purpose of using persistent connections.
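
Schematically, the exchange looks like this (simplified; the path, host and size are illustrative). The client only transmits the SOAP request body after it has received the interim response, so a broken connection is detected before the request is actually sent:

POST /services/MyService HTTP/1.1
Host: example.com
Content-Type: text/xml; charset=UTF-8
Content-Length: 1234
Expect: 100-continue

HTTP/1.1 100 Continue

[client sends the 1234-byte SOAP request body]

HTTP/1.1 200 OK
...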

WebSphere supports both mechanisms, but they are not enabled by default (for obvious reasons: they should not be required with well behaved HTTP servers, and both are problematic). It is possible to enable one of these strategies on specific JAX-WS clients by configuring an HTTP transport policy.

Unfortunately, as of WebSphere 7.0.0.27 (and 8.5.0.2) none of these settings actually allow the JAX-WS runtime to recover from a premature termination of an idle connection. If Send 'Expect 100-continue' request header is enabled in the policy, then the following error occurs if the JAX-WS runtime attempts to send a request at the same time the server closes the idle connection:

java.io.IOException: Request not finished yet

With Resend enabled, another exception is triggered:

java.lang.RuntimeException: Invalid call to WsByteBuffer method.  Buffer has already been released.

This means that WebSphere's JAX-WS runtime is not able to handle the premature termination of an idle HTTP connection at all, and the only workaround is to disable persistent connections.

Saturday, May 4, 2013

Maven patterns: skipping site generation, but not staging

In a multi-module Maven build, it is sometimes necessary to create a module that contributes content to the Maven site, but that doesn't have any content that needs to be rendered using maven-site-plugin. That is, the module doesn't execute any reports and doesn't contain any XDoc or APT documents. Instead, it simply executes some plugin(s) in the site phase to generate the content. Here are two scenarios where this is useful:

  • A module that contains documentation written in Docbook that should be published as HTML to the Maven site and also rendered as PDF for inclusion in a distribution. In this scenario, one would configure two executions of the docbkx-maven-plugin: one for the generate-html goal during the site phase, and one for the generate-pdf goal during the compile phase (a configuration sketch is shown after this list). One would then use build-helper-maven-plugin to attach the generated PDF file to the project so that it will be available later for inclusion in the distribution.

  • In a multi-module build, one generally wants to produce a single Javadoc collection for all the code in the project. This is typically done by configuring the javadoc:aggregate goal/report in the root POM. However, since that goal aggregates sources from all modules (excluding test sources), in large projects this often results in the inclusion of unwanted classes, e.g. utility classes that are used by the build process itself or by test code. An effective strategy to avoid this is to have a dedicated Maven module for the Javadoc generation and to use Maven dependencies (together with the includeDependencySources and dependencySourceIncludes parameters) to control which modules should be included in the Javadoc.
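
For the first scenario, the plugin configuration might look like the following sketch (plugin version omitted; the com.agilejava.docbkx groupId is assumed, and the goal and phase bindings are the ones described above):

<plugin>
  <groupId>com.agilejava.docbkx</groupId>
  <artifactId>docbkx-maven-plugin</artifactId>
  <executions>
    <!-- HTML output, generated as part of the Maven site -->
    <execution>
      <id>html</id>
      <phase>site</phase>
      <goals>
        <goal>generate-html</goal>
      </goals>
    </execution>
    <!-- PDF output, generated during the normal build -->
    <execution>
      <id>pdf</id>
      <phase>compile</phase>
      <goals>
        <goal>generate-pdf</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The generated PDF would then be attached with build-helper-maven-plugin's attach-artifact goal so that the distribution module can pick it up later.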

It is important to understand that although the site generation (i.e. the execution of the site:site goal) for these modules should be skipped, the site staging must not be skipped. Otherwise, mvn site:stage would fail to assemble the site.

With version 3.0 of the maven-site-plugin, it was enough to set the skip parameter to true to achieve this:

<plugin>
  <artifactId>maven-site-plugin</artifactId>
  <configuration>
    <skip>true</skip>
  </configuration>
</plugin>

This would indeed skip the site goal, but not the stage goal.

It turns out that this is actually a bug. In fact, the documentation for the skip parameter reads:

Set this to 'true' to skip site generation and staging.

In version 3.2, the skip parameter behaves as documented and the configuration shown above would not only skip site generation, but also staging.

Instead of setting skip to true, one may try to set generateReports to false. This would indeed prevent maven-site-plugin from generating any reports. Since in addition the module is not expected to have a src/site folder, the site goal would generate no content. However, maven-site-plugin would still copy files from the site skin (such as CSS files and images) to the output directory, and this is undesirable.

The correct solution is to override the default execution of the site goal and to apply the skip parameter only to that execution:

<plugin>
  <artifactId>maven-site-plugin</artifactId>
  <executions>
    <execution>
      <id>default-site</id>
      <phase>site</phase>
      <goals>
        <goal>site</goal>
      </goals>
      <configuration>
        <skip>true</skip>
      </configuration>
    </execution>
  </executions>
</plugin>