Focusing Your Efforts - xspec/xspec GitHub Wiki

XSpec descriptions can grow quite large, which can make running the tests slow. There are three ways of dealing with this.

Importing Other XSpec Documents

First, you can import other XSpec description documents into your main one using x:import. The href attribute holds the location of the imported document. All the scenarios from the referenced document are imported into the main one and are run when you execute it. For example:

<x:import href="other_xspec.xml" />

It helps if the imported XSpec description documents can stand alone, because that lets you run just a subset of the tests on their own. To work effectively, the imported XSpec description documents should cover the same stylesheet as the main one, or a stylesheet module that is included or imported into that stylesheet.
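As a minimal sketch of how this fits together (the filenames and the scenario content are illustrative, not from the source), the x:import element sits directly inside the main document's x:description, alongside any scenarios defined there:

```xml
<!-- main_xspec.xml: a hypothetical main XSpec description document -->
<x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
               stylesheet="my_stylesheet.xsl">

   <!-- Pull in all scenarios from a stand-alone XSpec document -->
   <x:import href="other_xspec.xml" />

   <!-- Scenarios defined here run alongside the imported ones -->
   <x:scenario label="when processing a para element">
      <x:context>
         <para>...</para>
      </x:context>
      <x:expect label="it should produce a p element">
         <p>...</p>
      </x:expect>
   </x:scenario>

</x:description>
```

Because other_xspec.xml is itself a complete x:description, it can also be run on its own when you only want to exercise that subset of tests.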

When deciding how to organize your test scenarios among main and imported XSpec description documents, consider these rules about how the documents interact with each other:

  • Override each global XSLT parameter or variable (using /x:description/x:param) and define each global XSpec variable (using /x:description/x:variable) at most once across the main XSpec description document and all imported ones. All the /x:description/x:param/@name and /x:description/x:variable/@name values in all the documents must be distinct. Tip: The External Transformation feature enables you to override global XSLT parameters differently per XSpec description document, via scenario-level parameters (//x:scenario/x:param).
  • When using attributes of x:description to vary behavior, specify your desired attribute value at the correct level among the imported XSpec description documents or the main one. The next bullets indicate where to specify attributes so they take effect when you run tests from the main XSpec description document.
    • Specify in main document:
      • measure-time
      • result-file-threshold
      • run-as
      • threads
    • Specify in individual imported documents:
      • expand-text
      • preserve-space
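A hedged sketch of how this division of attributes plays out (filenames and attribute values are illustrative, not prescribed by the source): execution-related attributes go on the main document's x:description, while text- and whitespace-handling attributes go on each imported document's x:description:

```xml
<!-- main_xspec.xml (illustrative): execution-related attributes
     take effect when specified on the main document -->
<x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
               stylesheet="my_stylesheet.xsl"
               run-as="external"
               threads="4">
   <x:import href="other_xspec.xml" />
</x:description>
```

```xml
<!-- other_xspec.xml (illustrative): text/whitespace attributes
     take effect when specified on the imported document itself;
     the attribute values shown here are examples only -->
<x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
               stylesheet="my_stylesheet.xsl"
               expand-text="yes">
   ...
</x:description>
```

With this arrangement, setting threads or run-as in other_xspec.xml would have no effect when you run the tests from main_xspec.xml, and vice versa for expand-text and preserve-space.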

Marking Scenario or Expectation as "pending"

Second, you can mark any scenario or expectation as "pending", either by wrapping it in an x:pending element or by adding a pending attribute to the x:scenario element. When the tests are run, any pending scenarios or expectations aren't tested (though they still appear, greyed out, in the test report). Exception: The focus attribute, described in the next section, takes precedence over x:pending and pending.

The x:pending element can have a label attribute describing why the particular description is pending; for example, it might hold "TODO". If you use the pending attribute instead, its value should give the reason the tests are pending. For example:

<x:pending label="no support for block elements yet">
   <x:scenario label="when processing a para element">
      <x:context>
         <para>...</para>
      </x:context>
      <x:expect label="it should produce a p element">
         <p>...</p>
      </x:expect>
   </x:scenario>
</x:pending>

or:

<x:scenario pending="no support for block elements yet" label="when processing a para element">
   <x:context>
      <para>...</para>
   </x:context>
   <x:expect label="it should produce a p element">
      <p>...</p>
   </x:expect>
</x:scenario>

Marking Scenario as Having the Current "focus"

Third, you can mark any scenario as having the current "focus" by adding a focus attribute to an x:scenario element. Effectively, this marks every other scenario as "pending", with the label given by the value of the focus attribute. For example:

<x:scenario focus="getting capitalisation working" label="when capitalising a string">
   <x:call function="eg:capital-case">
      <x:param select="'an example string'" />
      <x:param select="true()" />
   </x:call>
   <x:expect label="it should capitalise every word in the string" select="'An Example String'" />
</x:scenario>

Using focus is a good way of working through one particular scenario, but once your code passes that scenario, you should run all the other tests again, just in case you've broken something else.
