10 minutes to read
docToolchain Manual
1. Install docToolchain
1.1. Installation Overview
docToolchain is composed of two parts:
- doctoolchain, the toolchain used to create your documentation
- the docToolchain shell wrapper script, installed in your project, which calls the toolchain
The use of this setup has the following advantages:
- It’s easy to build your documentation within your project folder.
- Everyone in the project uses the same docToolchain version.
- All docToolchain technology stays out of your project repository.
- docToolchain is installed automatically if it is not installed yet.
- Upgrading to newer versions of docToolchain is easier.
1.2. Install dtcw in your project directory
The docToolchain wrapper script dtcw (dtcw.ps1, respectively dtcw.bat, for MS Windows) is meant to be installed in your project root directory. The wrapper script simplifies calls to docToolchain.
Even if you are going to use docToolchain in multiple projects, the toolchain will only be installed once on your system.
If you have an Apple silicon (M1/M2) Mac, make sure that Docker is up and running, then type the following command in the Terminal:
`arch -x86_64 /bin/bash`
Now, download dtcw
into your project directory and make the script executable with the following commands:
cd <your project>
curl -Lo dtcw https://doctoolchain.org/dtcw
chmod +x dtcw
If you don’t have curl installed, you can also use wget:
cd <your project>
wget doctoolchain.org/dtcw
chmod +x dtcw
On MS Windows, download dtcw.ps1 using PowerShell:
cd <your project>
Invoke-WebRequest doctoolchain.org/dtcw.ps1 -Outfile dtcw.ps1
Got an error message that you are not allowed to execute PowerShell scripts? Try switching to an unrestricted PowerShell by executing powershell.exe -ExecutionPolicy Unrestricted.
Alternatively, download the batch wrapper dtcw.bat:
cd <your project>
curl -Lo dtcw.bat doctoolchain.org/dtcw.bat
dtcw.bat wraps the dtcw.ps1 script and executes it in PowerShell. This might be easier to use if you haven’t yet configured your PowerShell as a developer.
In case your development team uses different operating systems, put the wrapper scripts for all desired operating systems (dtcw, dtcw.ps1, and dtcw.bat) into your project.
Once the docToolchain wrapper is installed in your project directory, you have to decide how to install the toolchain:
- Run docToolchain in a container with the docToolchain container image.
- Install docToolchain with dtcw in the user's home directory $HOME/.doctoolchain.
- Install docToolchain with SDKMAN!, a tool for managing parallel versions of multiple Software Development Kits.
docToolchain depends on Java; Java 11, 14, and 17 are supported.
If you don’t use the docToolchain container image, you have to install Java on your system.
In case you have Java already installed, make sure it is one of the supported versions.
1.3. Run docToolchain in a container
The docToolchain project provides a container image of approximately 900 MB from the Docker Hub container registry. The Dockerfile from which the image is created may be found at https://github.com/docToolchain/docker-image.
To run docToolchain in a container you need an installed container engine. The best-known container engine is Docker.
If the container engine is installed you can Run your First Command. The docToolchain wrapper script in your project directory will detect the container engine and pull the docToolchain image on the first invocation.
1.4. Install docToolchain with dtcw
To install docToolchain in $HOME/.doctoolchain
execute the following command.
./dtcw install doctoolchain
In case you have no Java installed you may use dtcw
to install Java in a sub-directory of $HOME/.doctoolchain
.
./dtcw install java
Unable to locate Java Runtime - check your Bash environment
In case you have no Java installed you can use dtcw.ps1
to install Java:
.\dtcw.ps1 install java
dtcw.bat install java
If the docToolchain installation finished successfully, you are ready to Run your First Command.
1.5. Install docToolchain with SDKMAN!
TODO: description how to install docToolchain with SDKMAN!.
1.6. Run your First Command
Call the docToolchain wrapper with tasks --group doctoolchain to show all tasks provided by docToolchain. Those tasks may be used when invoking the docToolchain wrapper script.
The first time docToolchain is called, it downloads all necessary dependencies, so the execution of the command may take some time. Subsequent calls to docToolchain will be faster.
./dtcw tasks --group=doctoolchain
dtcw 0.50 - 8061694f
docToolchain 2.3.0
Available docToolchain environments: local (1)
Environments with docToolchain [2.3.0]: local (2)
Using environment: local (3)
Using Java 17.0.6 [/home/john_doe/.doctoolchain/jdk/bin/java] (4)
Downloading https://services.gradle.org/distributions/gradle-7.5.1-bin.zip (5)
..........10%..........20%..........30%...........40%..........50%..........60%..........70%...........80%..........90%..........100%
Welcome to Gradle 7.5.1!
Here are the highlights of this release:
- Support for Java 18
- Support for building with Groovy 4
- Much more responsive continuous builds
- Improved diagnostics for dependency resolution
For more details see https://docs.gradle.org/7.5.1/release-notes.html
To honour the JVM settings for this build a single-use Daemon process will be forked. See https://docs.gradle.org/7.5.1/userguide/gradle_daemon.html#sec:disabling_the_daemon.
Daemon will be stopped at the end of the build
> Configure project :
Config file '/code/docToolchainConfig.groovy' does not exist (6)
[ant:input]
[ant:input] do you want me to create a default one for you? (y, n)
y
(1) List of available docToolchain environments. The output may vary depending on your system. In our example only the local environment is available since neither sdk nor docker was found.
(2) Environments in which docToolchain is available. The output may vary depending on how you installed docToolchain. In our example docToolchain was found in the user’s local environment in $HOME/.doctoolchain.
(3) Shows the used docToolchain environment. In case docToolchain is installed in more than one environment, the wrapper script picks the environment in the following order: local, sdk, and then docker.
(4) Location of the used Java. In our example Java was installed in the local environment with the docToolchain wrapper script.
(5) docToolchain was invoked the first time, thus it is downloading its dependencies.
(6) The docToolchain configuration file docToolchainConfig.groovy wasn’t found in the project repository. docToolchain asks if it should create a new one.
.\dtcw.ps1 tasks --group=doctoolchain
dtcw.bat tasks --group=doctoolchain
If you are behind a corporate proxy, note that build-script dependencies are fetched from a repository referenced by the property mavenRepository. By default the value https://plugins.gradle.org/m2/ is used. When a repository requiring credentials is used, the properties mavenUsername and mavenPassword can be set as well.
DTC_OPTS="-PmavenRepository=your_maven_repo -PmavenUsername=your_username -PmavenPassword=your_pw" ./dtcw tasks --group=doctoolchain --info
1.7. Configure docToolchain to Use Existing Documents
If your project already has documents in AsciiDoc format, you’ll need to tell docToolchain where to find them.
To do so, take a look at the created docToolchainConfig.groovy
and update it.
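The central setting is inputFiles, which lists the documents to render and the target formats (it is also shown in the generatePDF section below). A minimal sketch of the relevant part of docToolchainConfig.groovy; the file name manual.adoc is only an illustrative assumption, adjust it to your project:
inputFiles = [
    // path is relative to your configured docs folder; a hypothetical example
    [file: 'manual.adoc', formats: ['html', 'pdf']],
    /** inputFiles **/
]
If your sources or images live in non-default folders, also check settings such as inputPath and imageDirs in the same file.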
1.8. Create a New Documentation Project from Scratch with Arc42
If you want to use the arc42 template in your project, you can get it in AsciiDoc format by using the following commands.
./dtcw downloadTemplate
.\dtcw.ps1 downloadTemplate
dtcw.bat downloadTemplate
1.9. Generate HTML and PDF
By now, the docToolchain wrapper dtcw
should be in your project folder along with the arc42 template.
Now let’s render arc42 to HTML and PDF. To do so, run the commands below:
./dtcw generateHTML
./dtcw generatePDF
.\dtcw.ps1 generateHTML
.\dtcw.ps1 generatePDF
As a result, you will see the progress of your build together with some warnings, which you can ignore for the moment.
The first build generated some files within the build folder:
build
|-- html5
| |-- arc42
| | `-- arc42.html
| `-- images
| |-- 05_building_blocks-EN.png
| |-- 08-Crosscutting-Concepts-Structure-EN.png
| `-- arc42-logo.png
`-- pdf
|-- arc42
| `-- arc42.pdf
`-- images
|-- 05_building_blocks-EN.png
|-- 08-Crosscutting-Concepts-Structure-EN.png
`-- arc42-logo.png
Congratulations! If you see the same folder structure, you’ve just rendered the standard arc42 template as HTML and PDF!
Please raise an issue on GitHub if you didn’t get the right output.
Blog posts: Behind the great Firewall, Enterprise AsciiDoctor
1.10. Upgrading to a New docToolchain Release
If there is a new docToolchain release you wish to use, do the following:
- Open the docToolchain wrapper script (dtcw, respectively dtcw.ps1 and dtcw.bat) in your favourite text editor and look for the line with DTC_VERSION, which should be located near the start of the file:
# See https://github.com/docToolchain/docToolchain/releases for available versions.
# Set DTC_VERSION to "latest" to get the latest, yet unreleased docToolchain version.
VERSION=2.1.0
- Change it to match the desired release.
- In case you want to install docToolchain in the local user environment, install the new docToolchain release with the following command:
./dtcw install doctoolchain
- If you want to test a not-yet-released feature, you can set DTC_VERSION to latest and dtcw will clone or pull the current default branch of the project. Please note this only works with a local copy, not with a Docker install.
- If you want to develop new features for docToolchain, you can also use latestdev as version. In this case, dtcw will try to clone the docToolchain repository with the ssh-git protocol to a fork in $HOME/.doctoolchain/docToolchain-latest.
latest and latestdev currently only work with the bash version of the wrapper.
2. Using docToolchain to Build Docs
1 minute to read
docToolchain implements many features via scripts, which you call through the command line. These scripts are called tasks in this documentation.
Learn more about these scripts in the Tasks menu.
3. autobuildSite
1 minute to read
3.1. About This Task
This script starts an endless loop which checks for changes to your docs sources, then re-runs the generateSite task whenever it detects changes.
The output is logged to build/generateSite.log.
3.2. Source
#!/bin/bash
DIR_TO_WATCH='src/'
#COMMAND='rm -r build || true && mkdir -p build/microsite/output/images/ && ./dtcw generateSite 2>&1 | tee build/generateSite.log'
COMMAND='mkdir -p build/microsite/output/images/ && ./dtcw generateSite 2>&1 | tee build/generateSite.log'
#execute first time
cp src/docs/images/ready.png build/microsite/output/images/status.png
#eval $COMMAND
#wait for changes and execute
while true ; do
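# watch --chgexit blocks until the checksum of the directory listing changes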
watch --no-title --chgexit "ls -lR ${DIR_TO_WATCH} | sha1sum"
cp src/docs/images/building.png build/microsite/output/images/status.png
eval "$COMMAND"
cp src/docs/images/ready.png build/microsite/output/images/status.png
sleep 6
done
3.3. generateHTML
3 minutes to read
About This Task
This is the standard Asciidoctor generator which is supported out of the box.
The result is written to build/html5
(the HTML files need the images folder to be in the same directory to display correctly).
Generating Single-File HTML Output
If you would like the generator to produce a single-file HTML, you can configure Asciidoctor to store the images inline as data-uri by setting :data-uri: in the config of your AsciiDoc file.
But be warned. The file can quickly become very large and some browsers might struggle to render it.
Creating Text-Based Diagrams
docToolchain is configured to use the asciidoctor-diagram plugin to create PlantUML diagrams. The plugin also supports many other text-based diagrams, but PlantUML is the most common. To use the plugin, specify your PlantUML code like this:
.example diagram
[plantuml, "{plantUMLDir}demoPlantUML", png] (1)
----
class BlockProcessor
class DiagramBlock
class DitaaBlock
class PlantUmlBlock
BlockProcessor <|-- DiagramBlock
DiagramBlock <|-- DitaaBlock
DiagramBlock <|-- PlantUmlBlock
----
(1) The first element of this list specifies the diagram tool plantuml to be used. The second element is the name of the image to be created, and the third specifies the image type.
{plantUMLDir} ensures that PlantUML also works for the generatePDF task. Without it, generateHTML works fine, but the PDF will not contain the generated images.
Be sure to specify a unique image name for each diagram, otherwise they will overwrite each other.
The above example renders as:
Controlling Diagram Size
If you want to control the size of the diagram in the output, configure the "width" attribute (in pixels) or the "scale" attribute (floating-point ratio) passed to asciidoctor-diagram. The following example updates the diagram above by changing the declaration to one of the versions below:
[plantuml, target="{plantUMLDir}demoPlantUMLWidth", format=png, width=250]
# rest of the diagram definition
[plantuml, target="{plantUMLDir}demoPlantUMLScale", format=png, scale=0.75]
# rest of the diagram definition
The output will render like this:
To work correctly, PlantUML needs Graphviz dot installed. If you can’t install it, use the Java-based version of the dot library instead: just add !pragma layout smetana as the first line of your diagram definition.
Further Reading and Resources
- This blog post explains more about single-file HTML.
- Read this blog post to understand how to use PlantUML without Graphviz dot.
- Other helpful posts related to the generateHTML task:
Source
task generateHTML (
type: AsciidoctorTask,
group: 'docToolchain',
description: 'use html5 as asciidoc backend') {
attributes (
'plantUMLDir' : file("${docDir}/${config.outputPath}/html5").toURI().relativize(new File("${docDir}/${config.outputPath}/html5/plantUML/").toURI()).getPath(),
)
// specify output folder explicitly to avoid cleaning targetDir from other generated content
outputDir = file(targetDir + '/html5/')
outputOptions {
separateOutputDirs = false
backends = ['html5']
}
def sourceFilesHTML = findSourceFilesByType(['html'])
// onlyIf {
// sourceFilesHTML
// }
sources {
sourceFilesHTML.each {
include it.file
File useFile = new File(srcDir, it.file)
if (!useFile.exists()) {
throw new Exception ("""
The file $useFile in HTML config does not exist!
Please check the configuration 'inputFiles' in $mainConfigFile.""")
}
}
}
resources {
config.imageDirs.each { imageDir ->
from(new File(file(srcDir),imageDir))
logger.info ('imageDir: '+imageDir)
into './images'
}
config.resourceDirs.each { resource ->
from(new File(file(srcDir),resource.source))
logger.info ('resource: '+resource.source)
into resource.target
}
}
doFirst {
if (sourceFilesHTML.size()==0) {
throw new Exception ("""
>> No source files defined for type 'html'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
}
}
3.4. fixEncoding
1 minute to read
About This Task
Whenever Asciidoctor has to process a file that is not UTF-8 encoded, Ruby tries to read it, then throws an error similar to this one:
asciidoctor: FAILED: /home/demo/test.adoc: Failed to load AsciiDoc document - invalid byte sequence in UTF-8
Unfortunately, finding the incorrectly encoded file is difficult if a lot of include:: directives are used, as Asciidoctor will only show the name of the main document. This is not Asciidoctor’s fault; the fault lies with the Ruby interpreter that sits underneath.
The fixEncoding task crawls through all *.ad
and *.adoc
files and checks their encoding.
If it comes across a file which is not UTF-8 encoded, it will rewrite it with the UTF-8 encoding.
Source
import groovy.util.*
import static groovy.io.FileType.*
task fixEncoding(
description: 'finds and converts non UTF-8 adoc files to UTF-8',
group: 'docToolchain helper',
) {
doLast {
File sourceFolder = new File("${docDir}/${inputPath}")
println("sourceFolder: " + sourceFolder.canonicalPath)
sourceFolder.traverse(type: FILES) { file ->
if (file.name ==~ '^.*(ad|adoc|asciidoc)$') {
CharsetToolkit toolkit = new CharsetToolkit(file);
// guess the encoding
def guessedCharset = toolkit.getCharset().toString().toUpperCase();
if (guessedCharset!='UTF-8') {
def text = file.text
file.write(text, "utf-8")
println(" converted ${file.name} from '${guessedCharset}' to 'UFT-8'")
}
}
}
}
}
3.5. prependFilename
1 minute to read
About This Task
When Asciidoctor renders a file, the file context only knows the name of the top-level AsciiDoc file. But an include file doesn’t know that it is being included. It simply gets the name of the master file and has no chance to get its own name as an attribute. This task crawls through all AsciiDoc files and prepends the name of the current file like this:
:filename: 015_tasks/03_task_prependFilename.adoc
This way, each file gets its own filename. This enables features like the inclusion of file contributors (see exportContributors-task).
This task skips all files named config.*, _config.*, feedback.*, and _feedback.*.
Source
import static groovy.io.FileType.*
task prependFilename(
description: 'crawls through all AsciiDoc files and prepends the name of the current file',
group: 'docToolchain helper',
) {
doLast {
File sourceFolder = new File("${docDir}/${inputPath}")
println("sourceFolder: " + sourceFolder.canonicalPath)
sourceFolder.traverse(type: FILES) { file ->
if (file.name ==~ '^.*(ad|adoc|asciidoc)$') {
if (file.name.split('[.]')[0] in ["feedback", "_feedback", "config", "_config"]) {
println "skipped "+file.name
} else {
def text = file.getText('utf-8')
def name = file.canonicalPath - sourceFolder.canonicalPath
name = name.replace("\\", "/").replaceAll("^/", "")
if (text.contains(":filename:")) {
text = text.replaceAll(":filename:.*", ":filename: $name")
println "updated "+name
} else {
text = ":filename: $name\n" + text
println "added "+name
}
file.write(text,'utf-8')
}
}
}
}
}
3.6. collectIncludes
2 minutes to read
About This Task
This task crawls through your entire project looking for AsciiDoc files with a specific name pattern, then creates a single AsciiDoc file which includes only those files.
When you create modular documentation, most includes are static. For example, the arc42-template has 12 chapters and a master template that includes those 12 chapters.
Normally when you work with dynamic modules like ADRs (Architecture Decision Records) you create those files on the fly.
Maybe not within your /src/docs
folder, but alongside the code file for which you wrote the ADR.
In order to include these files in your documentation, you have to add the file with its whole relative path to one of your AsciiDoc files.
This task will handle it for you!
Just stick to this file-naming pattern ^[A-Za-z]{3,}[-_].* (begin with at least three letters followed by a dash or underscore) and this task will collect the file and write it to your build folder.
You only have to include this generated file from within your documentation.
If you provide templates for the documents, those templates are skipped if the name matches the pattern ^.*[-_][tT]emplate[-_].*.
The Optional Parameter Configurations
You can configure which files are found by the script by setting the parameters in the Config.groovy file.
collectIncludes = [:]
collectIncludes.with {
fileFilter = "adoc" // define which files are considered. default: "ad|adoc|asciidoc"
minPrefixLength = "3" // define the minimum length of the prefix. default: "3"
maxPrefixLength = "3" // define the maximum length of the prefix. default: ""
separatorChar = "_" // define the allowed separators after the prefix. default: "-_"
cleanOutputFolder = true // should the output folder be emptied before generation? default: false
excludeDirectories = [] // define additional directories that should not be traversed.
}
Example
You have a file called:
/src/java/yourCompany/domain/books/ADR-1-whyWeUseTheAISINInsteadOFISBN.adoc
The task will collect this file and write another file called:
/build/docs/_includes/ADR_includes.adoc
…which will look like this:
include::../../../src/java/yourCompany/domain/books/ADR-1-whyWeUseTheAISINInsteadOFISBN.adoc[]
Obviously, you’ll reap the most benefits if the task has several ADR files to collect. 😎
You can then include these files in your main documentation by using a single include:
include::{targetDir}/docs/_includes/ADR_includes.adoc[]
Source
import static groovy.io.FileType.*
import static groovy.io.FileVisitResult.*
import java.security.MessageDigest
task collectIncludes(
description: 'collect all ADRs as includes in one file',
group: 'docToolchain'
) {
doFirst {
boolean cleanOutputFolder = config.collectIncludes.cleanOutputFolder?:false
String outputFolder = targetDir + '/_includes'
if (cleanOutputFolder){
delete fileTree(outputFolder)
}
new File(outputFolder).mkdirs()
}
doLast {
//let's search the whole project for files, not only the docs folder
//exclude typical system folders
final defaultExcludedDirectories = [
'.svn', '.git', '.idea', 'node_modules', '.gradle', 'build', '.github'
]
//running as subproject? set scandir to main project
String scanDir_save = scanDir
if (project.name!=rootProject.name && scanDir=='.') {
scanDir = project(':').projectDir.path
}
if (docDir.startsWith('.')) {
docDir = file(new File(projectDir, docDir).canonicalPath)
}
logger.info "docToolchain> docDir: ${docDir}"
logger.info "docToolchain> scanDir: ${scanDir}"
if (scanDir.startsWith('.')) {
scanDir = file(new File(docDir, scanDir).canonicalPath)
} else {
scanDir = file(new File(scanDir, "").canonicalPath)
}
logger.info "docToolchain> scanDir: ${scanDir}"
logger.info "docToolchain> includeRoot: ${includeRoot}"
if (includeRoot.startsWith('.')) {
includeRoot = file(new File(docDir, includeRoot).canonicalPath)
}
logger.info "docToolchain> includeRoot: ${includeRoot}"
File sourceFolder = scanDir
println "sourceFolder: " + sourceFolder.canonicalPath
def collections = [:]
String fileFilter = config.collectIncludes.fileFilter?:"ad|adoc|asciidoc"
String minPrefixLength = config.collectIncludes.minPrefixLength?:"3"
String maxPrefixLength = config.collectIncludes.maxPrefixLength?:""
String separatorChar = config.collectIncludes.separatorChar?:"-_"
def extraExcludeDirectories = config.collectIncludes.excludeDirectories?:[]
def excludedDirectories = defaultExcludedDirectories + extraExcludeDirectories
String prefixRegEx = "[A-Za-z]{" + minPrefixLength + "," + maxPrefixLength + "}"
String separatorCharRegEx = "[" + separatorChar + "]"
String fileFilterRegEx = "^" + prefixRegEx + separatorCharRegEx + ".*[.](" + fileFilter + ")\$"
logger.info "considering files with this pattern: " + fileFilterRegEx
sourceFolder.traverse(
type: FILES,
preDir : { if (it.name in excludedDirectories) return SKIP_SUBTREE },
excludeNameFilter: excludedDirectories
) { file ->
if (file.name ==~ fileFilterRegEx) {
String typeRegEx = "^(" + prefixRegEx + ")" + separatorCharRegEx + ".*\$"
def type = file.name.replaceAll(typeRegEx,'\$1').toUpperCase()
if (!collections[type]) {
collections[type] = []
}
logger.info "file: " + file.canonicalPath
def fileName = (file.canonicalPath - scanDir.canonicalPath)[1..-1]
if (file.name ==~ '^.*[Tt]emplate.*$') {
logger.info "ignore template file: " + fileName
} else {
String includeFileRegEx = "^.*" + prefixRegEx + "_includes.adoc\$"
if (file.name ==~ includeFileRegEx) {
logger.info "ignore generated _includes files: " + fileName
} else {
if ( fileName.startsWith('docToolchain') || fileName.replace("\\", "/").matches('^.*/docToolchain/.*$')) {
//ignore docToolchain as submodule
} else {
logger.info "include corrected file: " + fileName
collections[type] << fileName
}
}
}
}
}
println "targetFolder: " + (targetDir - docDir)
logger.info "targetDir - includeRoot: " + (targetDir - includeRoot)
def pathDiff = '../' * ((targetDir - docDir)
.replaceAll('^/','')
.replaceAll('/$','')
.replaceAll("[^/]",'').size()+1)
logger.info "pathDiff: " + pathDiff
collections.each { type, fileNames ->
if (fileNames) {
def outFile = new File(targetDir + '/_includes', type + '_includes.adoc')
logger.info outFile.canonicalPath-sourceFolder.canonicalPath
outFile.write("// this is autogenerated\n")
logger.info "docToolchain> Use Antora integration: ${useAntoraIntegration}"
fileNames.sort().each { fileName ->
if (useAntoraIntegration) {
outFile.append("ifndef::optimize-content[]\n")
outFile.append ("include::../" + pathDiff + scanDir_save + "/" + fileName.replace("\\", "/")+"[]\n")
outFile.append("endif::optimize-content[]\n\n")
outFile.append("ifdef::optimize-content[]\n")
outFile.append ("include::example\$" + fileName.replace("\\", "/").replace("${inputPath}/modules/ROOT/examples/", "")+"[]\n")
outFile.append("endif::optimize-content[]\n\n")
} else {
outFile.append ("include::../" + pathDiff + scanDir_save + "/" + fileName.replace("\\", "/")+"[]\n\n")
}
}
}
}
}
}
3.7. generatePDF
2 minutes to read
About This Task
This task makes use of the asciidoctor-pdf plugin to render your documents as pretty PDF files.
Files are written to build/pdf
.
The PDF is generated directly from your AsciiDoc sources. There is no need for an intermediate format or other tools.
The result looks more like a nicely rendered book than a print-to-PDF HTML page.
For a file to be rendered, it has to be configured in the docToolchainConfig.groovy file.
There you will find a section that looks like this:
inputFiles = [
[file: 'manual.adoc', formats: ['html','pdf']],
/** inputFiles **/
]
Add the files that you want to be rendered, along with the desired format, in this case pdf.
Hint
Why do you need to configure the files to be rendered?
Asciidoctor renders all .adoc files by default, no matter whether they are main documents or chapters meant to be included. Most people only want to convert selected files to PDF, which is why you need to configure which ones.
Creating a Custom PDF Theme
If you want to change colors, fonts or page headers and footers, you can do so by creating a custom-theme.yml
file.
Copy the file src/docs/pdfTheme/custom-theme.yml from docToolchain to your project and reference it from your main .adoc file by setting :pdf-themesdir:.
In addition, set :pdf-theme: to the name of your theme, in this case custom.
For example, insert the following at the top of your document to reference custom-theme.yml from the /pdfTheme folder:
:pdf-themesdir: ../pdfTheme
:pdf-theme: custom
Further Reading and Resources
- Learn how to modify a theme by reading the asciidoctor-pdf theming guide.
- The Beyond HTML blog post is also an excellent resource if you want to dig a little deeper.
Source
task generatePDF (
type: AsciidoctorTask,
group: 'docToolchain',
description: 'use pdf as asciidoc backend') {
attributes (
'plantUMLDir' : file("${docDir}/${config.outputPath}/pdf/images/plantUML/").path,
)
outputDir = file(targetDir + '/pdf/')
attributes (
'data-uri': 'true',
'plantUMLDir' : file("${docDir}/${config.outputPath}/images/").path,
'imagesoutdir' : file("${docDir}/${config.outputPath}/images/").path
)
def sourceFilesPDF = findSourceFilesByType(['pdf'])
// onlyIf {
// sourceFilesPDF
// }
sources {
sourceFilesPDF.each {
include it.file
logger.info it.file
File useFile = new File(srcDir, it.file)
if (!useFile.exists()) {
throw new Exception ("""
The file $useFile in PDF config does not exist!
Please check the configuration 'inputFiles' in $mainConfigFile.""")
}
}
}
outputOptions {
backends = ['pdf']
}
doFirst {
if (sourceFilesPDF.size()==0) {
throw new Exception ("""
>> No source files defined for type 'pdf'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
}
/**
//check if a remote pdfTheme is defined
def pdfTheme = System.getenv('DTC_PDFTHEME')
def themeFolder = pdfTheme.md5()
if (pdfTheme) {
//check if it is already installed
//TODO: finish this...
}
**/
}
3.8. generateSite
8 minutes to read
About This Task
When you have only one document, the output of generateHTML might meet your requirements. But as your documentation grows and you have multiple documents, you will need a microsite which bundles all of the information.
The generateSite task uses jBake to create a static site with a landing page, a blog and search.
Pages
The microsite is page-oriented, not document-oriented. If you have already organized your documents by chapter, use them as pages to create a great user experience. The arc42-template sources are a good example.
To include a page in the microsite, add a metadata header to it.
:jbake-menu: arc42
:jbake-title: Solution Strategy
:jbake-order: 4
:jbake-type: page_toc
:jbake-status: published
:filename: 015_tasks/03_task_generateSite.adoc
:toc:
[[section-solution-strategy]]
=== Solution Strategy
Here is an overview of each element:
jbake-menu
The top-level menu’s code for this page. Defaults to the top-level folder name (without the order prefix) of the .adoc file within the docDir. Example: if the top-level folder name is 10_news it will default to the value news. For each code, the display text and the order in the top-level menu can be configured.
jbake-title
The title to be displayed in the drop-down top-level menu. Defaults to the first headline of the file.
jbake-order
Applies a sort order to drop-down entries. Defaults to a prefixed file number, such as 04_filename.adoc, or to the prefixed number of the second-level folder name. When nothing is defined the default value is -1, or -987654321 for index pages.
jbake-type
The page type. Controls which template is used to render the page. You will mostly use page for a full-width page or page_toc for a page with a table of contents (toc) rendered on the left. Defaults to page_toc.
jbake-status
Either draft or published. Only published pages will be rendered. Defaults to published for files with a jbake-order and draft for files without jbake-order or files prefixed with _.
filename
Required for edit and feedback links (coming soon). Defaults to the filename :-).
ifndef
Fixes the imagesdir according to the nesting level of your docs folder. Defaults to the main docDir/images.
toc
For :jbake-type: page_toc, you need this line to generate the toc. Start your pages with a == level headline.
You can fix the level offset when you include the page in a larger document with include::chapter.adoc[leveloffset=+1].
Configuration
The configuration follows the convention-over-configuration approach. If you follow the conventions, you don’t have to configure anything. But if you want to, you can override the convention behaviour with a configuration.
Menu
The navigation is organized with the following elements:
- A top-level menu.
- For each item of this top-level menu, a section sidebar on the left.
The location of a page in the top-level menu and in the section sidebar depends on:
- Its location in the folder structure
- Page attributes
- Site configurations
Example:
src/docs/
├── 10_foo
│ ├── 10_first.adoc
│ └── 20_second.adoc
└── 20_bar
├── 10_lorem.adoc
├── 20_ipsum
│ ├── 10_topic-A.adoc
│ └── 20_topic-B.adoc
└── 30_delis
├── 10_one.adoc
├── 20_two.adoc
└── index.adoc
The top-level folders (10_foo and 20_bar) are used to determine to which menu code the page belongs (foo and bar, unless overridden by the :jbake-menu: inside each page).
In the section sidebar, the navigation tree is determined by the folder structure. Folders are nodes in the sidebar tree. Each node can contain pages (leaves) or other folders (child nodes). The order is controlled by the prefix of the file or folder name (unless overridden by the :jbake-order: inside each page).
When an index page is present (like 20_bar/30_delis/index.adoc in the example), the navigation tree node corresponds to this index page (you can click on it and the title is taken from the page). When this index.adoc does not declare a specific order with :jbake-order:, the order of the parent folder is used (for the example: 30, because the folder is named 30_delis).
When the index page is absent (there is no 20_bar/20_ipsum/index.adoc in the example), the name of the folder is used to create the node, and you cannot click on the node because no page is associated with it. You can still define the order with the name (for the example 20, because the folder is named 20_ipsum).
When there is no sub-folder, only a flat list of pages is created.
When an index.adoc page is defined inside the top-level folder (like 10_foo/index.adoc or 20_bar/index.adoc), the page will be listed in the section navigation tree in the sidebar like any other regular page. By default it will be the first element of the tree, unless the value is overridden by a :jbake-order: attribute.
The :jbake-menu: is only the code for the menu entry to be created. You can map these codes to menu entries through the configuration (microsite section) in the following way:
menu = [code1: 'Some Title 1', code2: 'Other Title 2', code3: '-']
When no mapping is defined in the configuration file, the code is used as title.
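For orientation, a hedged sketch of where this mapping lives in docToolchainConfig.groovy; the codes and titles are illustrative only (the '-' entry hides a menu, as explained below):
microsite.with {
    // map menu codes to display titles; '-' hides the menu entry
    menu = [foo: 'Foo Docs', bar: 'Bar Docs', internal: '-']
}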
The menu configuration also impacts the display order.
If you have four files located in the following folder structure:
src/docs
├── code1
│ ├── demo1.adoc
│ └── demo2.adoc
└── code3
├── demo3.adoc
└── _demo4.adoc
Where demo1.adoc and demo3.adoc contain no :jbake-menu: header and demo2.adoc contains :jbake-menu: code2, then:
- demo1.adoc will have a menu code of code1 because it is located in the folder code1. This code is translated through the configuration to the menu named Some Title 1.
- demo2.adoc is in the same folder, but the :jbake-menu: attribute has a higher precedence, which results in menu code code2. This code is translated through the configuration to the menu named Other Title 2.
- demo3.adoc will have a menu code code3 because it is located in the folder code3. This code is translated through the configuration to the special menu - which will not be displayed. This is an easy way to hide a menu in the rendered microsite.
- _demo4.adoc starts with an underscore _ and thus will be handled as draft (:jbake-status: draft). It will not be rendered as part of any menu, but it will be available in the microsite as "hidden" _demo4-draft.html. Feel free to remove these draft renderings before you publish your microsite.
Links
In the column on the right, links are driven by the values defined in docToolchainConfig.groovy.
- "Improve this doc": displayed when gitRepoUrl is set.
- "Create an issue": displayed when issueUrl is set.
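A hedged sketch of these settings; in the default configuration they sit in the microsite section of docToolchainConfig.groovy, and the URLs here are placeholders:
microsite.with {
    // enables the "Improve this doc" link
    gitRepoUrl = 'https://github.com/your-org/your-repo'
    // enables the "Create an issue" link
    issueUrl = 'https://github.com/your-org/your-repo/issues'
}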
Configuring the JBake plugin
Behind the scenes, the generateSite task relies on jBake. In the docToolchainConfig.groovy it is possible to amend the configuration of the jBake Gradle plugin:
- Add additional AsciidoctorJ plugins (add dependencies to the jbake configuration)
- Add additional Asciidoctor attributes
//customization of the Jbake gradle plugin used by the generateSite task
jbake.with {
// possibility to configure additional asciidoctorj plugins used by jbake
plugins = [ ]
// possibility to configure additional asciidoctor attributes passed to the jbake task
asciidoctorAttributes = [ ]
}
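For example, to add one extra AsciidoctorJ extension and pass one extra attribute; the dependency coordinates and the attribute are purely illustrative assumptions, not specific recommendations:
jbake.with {
    // dependency coordinates are added to the jbake configuration
    plugins = [ 'com.example:asciidoctorj-some-extension:1.0.0' ]
    // passed through to Asciidoctor when jBake renders the pages
    asciidoctorAttributes = [ 'icons=font' ]
}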
The plugins are retrieved from a repository (by default Maven Central) configured with the project property depsMavenRepository. When a repository requiring credentials is used, the properties depsMavenUsername and depsMavenPassword can be set as well.
Templates and Style
The jBake templates and CSS are hidden for convenience.
The basic template uses Twitter Bootstrap 5 as its CSS framework.
Use the copyThemes
task to copy all hidden jBake resources to your project.
You can then remove the resources you don’t need, and change those you want to change.
copyThemes overwrites existing files, but because your code is safely managed using version control, this shouldn’t be a problem.
Landing Page
Place an index.gsp
page as your landing page in src/site/templates
.
This landing page is plain HTML5 styled with Twitter Bootstrap.
The page header and footer are added by docToolchain.
An example can be found at copyThemes
or on GitHub.
Blog
The microsite also contains a simple but powerful blog. Use it to inform your team about changes, as well as architecture decision records (ADRs).
To create a new blog post, create a new file in src/docs/blog/<year>/<post-name>.adoc
with the following template:
:jbake-title: <title-of your post>
:jbake-date: <date formatted as 2021-02-28>
:jbake-type: post
:jbake-tags: <blog, asciidoc>
:jbake-status: published
:imagesdir: ../../images
== {jbake-title}
{jbake-author}
{jbake-date}
<insert your text here>
Search
The microsite does not have its own local search. But it does have a search input field which can be used to link to another search engine.
CI/CD
When running in an automated build, set the environment variable DTC_HEADLESS to true or 1. This stops docToolchain from asking whether to install the configured theme; it will simply assume that you want to install it.
You can also avoid the theme being downloaded with every build by copying the themes folder from $HOME/.doctoolchain/themes to the corresponding folder in your build container.
Further Reading and Resources
Read about the previewSite task here.
Source
import groovy.util.*
import static groovy.io.FileType.*
buildscript {
repositories {
maven {
credentials {
username mavenUsername
password mavenPassword
}
url mavenRepository
}
}
dependencies {
classpath libs.asciidoctorj.diagram
}
}
repositories {
maven {
credentials {
username depsMavenUsername
password depsMavenPassword
}
url depsMavenRepository
}
}
dependencies {
jbake libs.asciidoctorj.diagram
jbake libs.pebble
config.jbake.plugins.each { plugin ->
jbake plugin
}
}
apply plugin: 'org.jbake.site'
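// helper to wrap console output in ANSI color escape codes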
def color = { color, text ->
def colors = [black: 30, red: 31, green: 32, yellow: 33, blue: 34, magenta: 35, cyan: 36, white: 37]
return new String((char) 27) + "[${colors[color]}m${text}" + new String((char) 27) + "[0m"
}
jbake {
version = '2.6.7'
srcDirName = "${targetDir}/microsite/tmp/site"
destDirName = "${targetDir}/microsite/output"
configuration['asciidoctor.option.requires'] = "asciidoctor-diagram"
config.microsite.each { key, value ->
configuration['site.'+key-'config.microsite.'] = value?:''
//println 'site.'+key-'config.microsite.' +" = "+ value
}
def micrositeContextPath = config.microsite.contextPath?:'/'
configuration['asciidoctor.attributes'] = [
"sourceDir=${targetDir}",
'source-highlighter=prettify@',
//'imagesDir=../images@',
"imagesoutDir=${targetDir}/microsite/output/images@",
"imagesDir=${micrositeContextPath.endsWith('/') ? micrositeContextPath : micrositeContextPath.concat('/')}images@",
"targetDir=${targetDir}",
"docDir=${docDir}",
"projectRootDir=${new File(docDir).canonicalPath}@",
]
if(config.jbake.asciidoctorAttributes) {
config.jbake.asciidoctorAttributes.each { entry ->
configuration['asciidoctor.attributes'] << entry
}
}
}
def prepareAndCopyTheme = {
//copy internal theme
println "copy internal theme ${new File(projectDir, 'src/site').canonicalPath}"
copy {
from('src/site')
into("${targetDir}/microsite/tmp/site")
}
//check if a remote pdfTheme is defined
def siteTheme = System.getenv('DTC_SITETHEME')?:""
def themeFolder = new File(projectDir, "../themes/" + siteTheme.md5())
try {
if (siteTheme) {
println "use siteTheme $siteTheme"
//check if it is already installed
if (!themeFolder.exists()) {
if (System.getenv('DTC_HEADLESS')) {
ant.yesno = "y"
} else {
println "${color 'green', """\nTheme '$siteTheme' is not installed yet. """}"
def input = ant.input(message: """
${color 'green', 'do you want me to download and install it to '}
${color 'green', ' ' + themeFolder.canonicalPath}
${color 'green', 'for you?'}\n""",
validargs: 'y,n', addproperty: 'yesno')
}
if (ant.yesno == "y") {
themeFolder.mkdirs()
download.run {
src siteTheme
dest new File(themeFolder, 'siteTheme.zip')
overwrite true
}
copy {
from zipTree(new File(themeFolder, 'siteTheme.zip'))
into themeFolder
}
delete {
delete new File(themeFolder, 'siteTheme.zip')
}
} else {
println "${color 'green', """\nI will continue without the theme for now... """}"
siteTheme = ""
}
}
//copy external theme
if (siteTheme) {
copy {
from(themeFolder) {}
into("${targetDir}/microsite/tmp/")
}
//check if the config has to be updated
// check if config still contains /** microsite **/
def configFile = new File(docDir, mainConfigFile)
def configFileText = configFile.text
if (configFileText.contains("/** start:microsite **/")) {
def configFragment = new File(targetDir,'/microsite/tmp/site/configFragment.groovy')
if (configFragment.exists()) {
println "${color 'green', """
It seems that this theme is used for the first time in this project.
Let's configure it!
If you are unsure, change these settings later in your config file
$configFile.canonicalPath
"""}"
def comment = ""
def conf = ""
def example = ""
def i = 0
configFragment.eachLine { line ->
if (line.trim()) {
if (line.startsWith("//")) {
conf += " " + line + "\n"
def tmp = line[2..-1].trim()
comment += color('green', tmp) + "\n"
if (tmp.toLowerCase().startsWith("example")) {
example = tmp.replaceAll("[^ ]* ", "")
}
} else {
//only prompt if there is something to prompt
if (line.contains("##")) {
def property = line.replaceAll("[ =].*", "")
if (!example) {
example = config.microsite[property]
}
comment = color('blue', "$property") + "\n" + comment
if (example) {
ant.input(message: comment,
addproperty: 'res' + i, defaultvalue: example)
} else {
ant.input(message: comment,
addproperty: 'res' + i)
}
(comment, example) = ["", ""]
line = line.replaceAll("##.+##", ant['res' + i])
conf += " " + line + "\n"
i++
} else {
conf += " " + line + "\n"
}
}
} else {
conf += "\n"
}
}
configFile.write(configFileText.replaceAll("(?sm)/[*][*] start:microsite [*][*]/.*/[*][*] end:microsite [*][*]/", "%%marker%%").replace("%%marker%%", conf))
println color('green', "config written\nopen ${targetDir}/microsite/output/index.html in your browser\nto see your microsite!")
}
//copy the dummy docs (blog, landing page) to the project repository
copy {
from(new File(themeFolder, 'site/doc')) {}
into(new File(docDir, inputPath))
}
}
}
}
} catch (Exception e) {
println color('red', e.message)
if (e.message.startsWith("Not Found")) {
themeFolder.deleteDir()
throw new GradleException("Couldn't find theme. Did you specify the right URL?\n"+e.message)
} else {
throw new GradleException(e.message)
}
}
//copy project theme
if (config.microsite.siteFolder) {
def projectTheme = new File(new File(docDir, inputPath), config.microsite.siteFolder)
println "copy project theme ${projectTheme.canonicalPath}"
copy {
from(projectTheme) {}
into("${targetDir}/microsite/tmp/site")
}
}
}
def convertAdditionalFormats = {
if (config.microsite.additionalConverters) {
File sourceFolder = new File(targetDir, '/microsite/tmp/site/doc')
sourceFolder.traverse(type: FILES) { file ->
def extension = '.' + file.name.split("[.]")[-1]
if (config.microsite.additionalConverters[extension]) {
def command = config.microsite.additionalConverters[extension].command
def type = config.microsite.additionalConverters[extension].type
def binding = new Binding([
file : file,
config: config
])
def shell = new GroovyShell(getClass().getClassLoader(), binding)
switch (type) {
case 'groovy':
shell.evaluate(command)
break
case 'groovyFile':
shell.evaluate(new File(docDir, command).text)
break
case 'bash':
if (command=='dtcw:rstToHtml.py') {
// this is an internal script
command = projectDir.canonicalPath+'/scripts/rstToHtml.py'
}
command = ['bash', '-c', command + ' "' + file + '"']
def process = command.execute([], new File(docDir))
process.waitFor()
if (process.exitValue()) {
def error = process.err.text
println """
can't convert '${file.canonicalPath-docDir-'/build/microsite/tmp/site/doc'}':
${error}
"""
throw new Exception("""
can't convert '${file.canonicalPath-docDir-'/build/microsite/tmp/site/doc'}':
${error}
""")
}
}
}
}
}
}
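// parse the :jbake-...: attribute lines out of an AsciiDoc file into the passed map;
// returns the remaining text plus any :toc: lines that must be re-emitted before the generated toc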
def parseAsciiDocAttribs = { origText, jbake ->
def parseAttribs = true
def text = ""
def beforeToc = ""
origText.eachLine { line ->
if (parseAttribs && line.startsWith(":jbake")) {
def parsedJbakeAttribute = (line - ":jbake-").split(": +", 2)
if(parsedJbakeAttribute.length != 2) {
logger.warn("jbake-attribute is not valid or Asciidoc conform: $line")
logger.warn("jbake-attribute $line will be ignored, trying to continue...")
} else {
jbake[parsedJbakeAttribute[0]] = parsedJbakeAttribute[1]
}
} else {
if (line.startsWith("[")) {
// stop parsing jBake-attribs when a [source] - block starts which might contain those attribs as example
parseAttribs = false
}
text += line+"\n"
//there are some attributes which have to be set before the toc
if (line.startsWith(":toc") ) {
beforeToc += line+"\n"
}
}
}
return [text, beforeToc]
}
def parseOtherAttribs = { origText, jbake ->
if (origText.contains('~~~~~~')) {
def parseAttribs = true
def text = ""
origText.eachLine { line ->
if (parseAttribs && line.contains("=")) {
line = (line - "jbake-").split("=", 2)
jbake[line[0]] = line[1]
} else {
if (line.startsWith("~~~~~~")) {
// stop parsing jBake-attribs when delimiter shows up
parseAttribs = false
} else {
text += line + "\n"
}
}
}
return text
} else {
return origText
}
}
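// render the collected jbake attributes back into a metadata header:
// key=value plus a '~~~~~~' delimiter for html/md files, :jbake-key: attributes for AsciiDoc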
def renderHeader = { fileName, jbake ->
def header = ''
if (fileName.toLowerCase() ==~ '^.*(html|md)$') {
jbake.each { key, value ->
if (key == 'order') {
header += "jbake-${key}=${(value ?: '1') as Integer}\n"
} else {
if (key in ['type', 'status']) {
header += "${key}=${value}\n"
} else {
header += "jbake-${key}=${value}\n"
}
}
}
header += "~~~~~~\n\n"
} else {
jbake.each { key, value ->
if (key == 'order') {
header += ":jbake-${key}: ${(value ?: '1') as Integer}\n"
} else {
header += ":jbake-${key}: ${value}\n"
}
}
}
return header
}
def fixMetaDataHeader = {
//fix MetaData-Header
File sourceFolder = new File(targetDir, '/microsite/tmp/site/doc')
logger.info("sourceFolder: " + sourceFolder.canonicalPath)
sourceFolder.traverse(type: FILES) { file ->
if (file.name.toLowerCase() ==~ '^.*(ad|adoc|asciidoc|html|md)$') {
if (file.name.startsWith("_") || file.name.startsWith(".")) {
//ignore
} else {
def origText = file.text
//parse jbake attributes
def text = ""
def jbake = [
status: "published",
order: -1,
type: 'page_toc'
]
if (file.name.toLowerCase() ==~ '^.*(md|html)$') {
// we don't have a toc for md or html
jbake.type = 'page'
}
def beforeToc = ""
if (file.name.toLowerCase() ==~ '^.*(ad|adoc|asciidoc)$') {
(text, beforeToc) = parseAsciiDocAttribs(origText, jbake)
} else {
text = parseOtherAttribs(origText, jbake)
}
def name = file.canonicalPath - (sourceFolder.canonicalPath+File.separator)
if (File.separator=='\\') {
name = name.split("\\\\")
} else {
name = name.split("/")
}
if (name.size()>1) {
if (!jbake.menu) {
jbake.menu = name[0]
if (jbake.menu ==~ /[0-9]+[-_].*/) {
jbake.menu = jbake.menu.split("[-_]", 2)[1]
}
}
def docname = name[-1]
if (docname ==~ /[0-9]+[-_].*/) {
jbake.order = docname.split("[-_]",2)[0]
docname = docname.split("[-_]",2)[1]
}
if (name.size() > 2) {
if ((jbake.order as Integer)==0) {
// let's take the order from the second level dir or file and not the file
def secondLevel = name[1]
if (secondLevel ==~ /[0-9]+[-_].*/) {
jbake.order = secondLevel.split("[-_]",2)[0]
}
} else {
if (((jbake.order?:'1') as Integer) > 0) {
//
} else {
jbake.status = "draft"
}
}
}
if (jbake.order==-1 && docname.startsWith('index')) {
jbake.order = -987654321 // special 'magic value' given to index pages.
jbake.status = "published"
}
// news blog
if (jbake.order==-1 && jbake.type=='post') {
jbake.order = 0
try {
jbake.order = Date.parse("yyyy-MM-dd", jbake.date).time / 100000
} catch ( Exception e) {
System.out.println "unparsable date ${jbake.date} in $name"
}
jbake.status = "published"
}
def leveloffset = 0
if (file.name.toLowerCase() ==~ '^.*(ad|adoc|asciidoc)$') {
text.eachLine { line ->
if (!jbake.title && line ==~ "^=+ .*") {
jbake.title = (line =~ "^=+ (.*)")[0][1]
def level = (line =~ "^(=+) .*")[0][1]
if (level == "=") {
leveloffset = 1
}
}
}
} else {
if (file.name.toLowerCase() ==~ '^.*(html)$') {
if (!jbake.title) {
text.eachLine { line ->
if (!jbake.title && line ==~ "^<h[1-9]>.*</h.*") {
jbake.title = (line =~ "^<h[1-9]>(.*)</h.*")[0][1]
}
}
}
} else {
// md
if (!jbake.title) {
text.eachLine { line ->
if (!jbake.title && line ==~ "^#+ .*") {
jbake.title = (line =~ "^#+ (.*)")[0][1]
}
}
}
}
}
if (!jbake.title) {
jbake.title = docname
}
if (leveloffset==1) {
//leveloffset needed
// we always start with "==" not with "="
// only used for adoc
text = text.replaceAll("(?ms)^(=+) ", '$1= ')
}
if (config.microsite.customConvention) {
def binding = new Binding([
file : file,
sourceFolder : sourceFolder,
config: config,
headers : jbake
])
def shell = new GroovyShell(getClass().getClassLoader(), binding)
shell.evaluate(config.microsite.customConvention)
System.out.println jbake
}
def header = renderHeader(file.name, jbake)
if (file.name.toLowerCase() ==~ '^.*(ad|adoc|asciidoc)$') {
file.write(header + "\nifndef::dtc-magic-toc[]\n:dtc-magic-toc:\n$beforeToc\n\n:toc: left\n\n++++\n<!-- endtoc -->\n++++\nendif::[]\n" + text, "utf-8")
} else {
file.write(header + "\n" + text, "utf-8")
}
}
}
}
}
}
task generateSite(
group: 'docToolchain',
description: 'generate a microsite using jBake.') {
doLast {
new File("${targetDir}/microsite/tmp").mkdirs()
println new File("${targetDir}/microsite/tmp/").canonicalPath
prepareAndCopyTheme()
//copy docs
copy {
from(new File(docDir, inputPath)) {}
into("${targetDir}/microsite/tmp/site/doc")
}
// if configured, convert restructuredText or anything else
convertAdditionalFormats()
// convention over configuration
fixMetaDataHeader()
}
}
task previewSite(
group: 'docToolchain',
dependsOn: [],
description: 'preview your Microsite',
) {
doLast {
println("previewSite command has been deprecated.")
println("To preview your site, open ${targetDir}/microsite/output/index.html in your browser.")
println("To read alternative ways to preview your site, please consult the documentation.")
}
}
previewSite.dependsOn(generateSite)
previewSite.mustRunAfter(bake)
task copyImages(type: Copy) {
config.imageDirs.each { imageDir ->
from(new File (new File(docDir, inputPath),imageDir)) {}
logger.info ('imageDir: '+imageDir)
into("${targetDir}/microsite/output/images")
}
config.resourceDirs.each { resource ->
from(new File(file(srcDir),resource.source))
logger.info ('resource: '+resource.source)
into("${targetDir}/microsite/output/" + resource.target)
}
}
bake.dependsOn copyImages
generateSite.finalizedBy bake
3.9. generateDocbook
1 minute to read
About This Task
A helper task: generateDocbook generates the intermediate DocBook format used by convertToDocx and convertToEpub.
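As with the other generators, the files to convert are taken from inputFiles; this task scans for the formats docbook, epub, and docx. A minimal sketch with a hypothetical file name:
inputFiles = [
    // 'docx' (or 'epub'/'docbook') routes the file through this task
    [file: 'manual.adoc', formats: ['docx']],
]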
Source
task generateDocbook (
type: AsciidoctorTask,
group: 'docToolchain',
description: 'use docbook as asciidoc backend') {
def sourceFilesDOCBOOK = findSourceFilesByType(['docbook', 'epub', 'docx'])
// onlyIf {
// sourceFilesDOCBOOK
// }
sources {
sourceFilesDOCBOOK.each {
include it.file
logger.info it.file
File useFile = new File(srcDir, it.file)
if (!useFile.exists()) {
throw new Exception ("""
The file $useFile in DOCBOOK config does not exist!
Please check the configuration 'inputFiles' in $mainConfigFile.""")
}
}
}
outputOptions {
backends = ['docbook']
}
outputDir = file(targetDir+'/docbook/')
doFirst {
if (sourceFilesDOCBOOK.size()==0) {
throw new Exception ("""
>> No source files defined for type of '[docbook, epub, docx]'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
}
}
3.10. generateDeck
1 minute to read
About This Task
This task makes use of the asciidoctor-reveal.js backend to render your documents into an HTML-based presentation.
For best results, use this task together with the exportPPT task: create a PowerPoint presentation, enrich it with reveal.js slide definitions written in AsciiDoc in the speaker notes, and let exportPPT extract them for this task to render.
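As with the other generators, the deck source must be listed in inputFiles, using the revealjs format this task scans for. A minimal sketch with a hypothetical file name:
inputFiles = [
    // 'revealjs' routes the file through generateDeck
    [file: 'slides.adoc', formats: ['revealjs']],
]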
Configure RevealJs
docToolchain comes with some opinionated, sane defaults for RevealJs. You can overwrite any of them and provide further configuration as per asciidoctor-reveal.js documentation.
Source
task generateDeck (
type: AsciidoctorJRevealJSTask,
group: 'docToolchain',
description: 'use revealJs as asciidoc backend to create a presentation') {
// corresponding Asciidoctor reveal.js config
// :revealjs_theme:
theme = 'black'
revealjsOptions {
// :revealjs_hideAddressBar:
hideAddressBarOnMobile = 'true'
// :revealjs_history:
pushToHistory = 'true'
// :revealjs_progress:
progressBar = 'true'
// :revealjs_slideNumber:
slideNumber = 'true'
// :revealjs_touch:
touchMode = 'true'
// :revealjs_transition:
transition = 'linear'
}
attributes (
'idprefix': 'slide-',
'idseparator': '-',
'docinfo1': '',
)
def sourceFilesREVEAL = findSourceFilesByType(['revealjs'])
sources {
sourceFilesREVEAL.each {
include it.file
logger.info it.file
File useFile = new File(srcDir, it.file)
if (!useFile.exists()) {
throw new Exception ("""
The file $useFile in REVEAL config does not exist!
Please check the configuration 'inputFiles' in $mainConfigFile.""")
}
}
}
outputDir = file(targetDir+'/decks/')
resources {
from(sourceDir) {
include 'images/**'
}
into("")
logger.info "${docDir}/${config.outputPath}/images"
}
doFirst {
if (sourceFilesREVEAL.size()==0) {
throw new Exception ("""
>> No source files defined for type 'revealjs'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
}
}
generateDeck.dependsOn asciidoctorGemsPrepare
3.11. publishToConfluence
10 minutes to read
About This Task
This task takes a generated HTML file, splits it by headline, and pushes it to your instance of Confluence. This lets you use the docs-as-code approach even if your organisation insists on using Confluence as its main document repository.
From 01.01.2024 on, Atlassian turns off API V1 for Confluence Cloud wherever a V2 equivalent exists. docToolchain versions from 3.1 on support API V2. If you are using an older version of docToolchain, you’ll need to upgrade to a newer version. To enable API V2, set useV1Api to false (see the configuration options below).
Currently, docToolchain only has full support for the old Confluence editor. The new editor is not fully supported yet. You can use the new editor, but you may experience some unexpected layout issues or changes. To make use of the new editor you need to set enforceNewEditor to true (see below).
Special Features
Easy Code Block Conversion
[source] blocks are converted to code-macro blocks in Confluence.
Confluence supports only a limited list of languages for code block syntax highlighting, and it would even display an error when an unknown language is specified. Therefore, some transformation is applied:
- If no language is given in the source block, it is explicitly set to plain text (because the default would be Java, which might not always apply).
- Some known and common AsciiDoc source languages are mapped to Confluence code block languages:
source | target | note
json   | yml    | produces an acceptable highlighting
shell  | bash   | only a specific shell is supported
yaml   | yml    | different name of language
- If the language of the source block is not supported by Confluence, it is set to plain text as a fallback to avoid the error.
Get a list of valid languages (and learn how to add others) here.
Minimal Impact on Non-Techie Confluence Users
Only pages and images that changed between task runs are published, and only those changes are notified to page watchers, cutting down on 'spam'.
Keywords Automatically Attached as Labels
:keywords:
are attached as labels to every Confluence page generated using the publishToConfluence
task.
See Atlassian’s own guidelines on labels.
Several keywords are allowed, and they must be separated by commas. For example: :keywords: label_1, label-2, label3, …
.
Labels (keywords) must not contain a space character. Use either '_' or '-'.
Configuration
You configure the publishToConfluence task in the file docToolchainConfig.groovy. It is located in the root of your project folder. We try to make the configuration self-explanatory, but below is some more information about each config option.
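Before looking at the individual options explained below, here is a minimal, hedged sketch of a confluence section; the space key, ancestor ID, URLs, and the token path are placeholders:
confluence.with {
    input = [
        [file: 'build/html5/arc42/arc42.html'],
    ]
    // id of the parent page under which the docs are published
    ancestorId = '123456'
    // looks like https://[yourServer]/[context]; the context can be omitted for Confluence Cloud
    api = 'https://your-company.atlassian.net/wiki'
    spaceKey = 'ARC'
    // 1 creates one level of sub-pages per section (the default)
    subpagesForSections = 1
    // read the API token from a file outside the repository
    credentials = "myuser:${new File(System.getProperty('user.home') + '/apitoken').text.trim()}".bytes.encodeBase64().toString()
}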
input
An array of files to upload to Confluence, with the ability to configure a different parent page for each file.
Attributes
- file: absolute or relative path to the asciidoc-generated HTML file to be exported
- url: absolute URL to an asciidoc-generated HTML file to be exported
- ancestorName (optional): the name of the parent page in Confluence as string; this attribute has priority over ancestorId, but if a page with the given name doesn’t exist, ancestorId will be used as a fallback
- ancestorId (optional): the id of the parent page in Confluence as string; leave this empty if a new parent shall be created in the space
The following four keys can also be used in the global section below:
- spaceKey (optional): page-specific variable for the key of the Confluence space to write to
- subpagesForSections (optional): the number of nested sub-pages to create. Default is '1'. '0' means creating all on one page. The following migration for the removed configuration can be used:
  - allInOnePage = true is the same as subpagesForSections = 0
  - allInOnePage = false && createSubpages = false is the same as subpagesForSections = 1
  - allInOnePage = false && createSubpages = true is the same as subpagesForSections = 2
- pagePrefix (optional): page-specific variable; the pagePrefix will be a prefix for the page title and its sub-pages. Use this if you only have access to one Confluence space but need to store several pages with the same title - a different pagePrefix will make them unique
- pageSuffix (optional): same usage as the prefix, but appended to the title and its subpages
Only 'file' or 'url' is allowed. If both are given, 'url' is ignored.
ancestorId
The page ID of the parent page where you want your docs to be published. Go to this page, click Edit and the required ID will show up in the URL. Specify the ID as a string within the config file.
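For illustration, an input array combining these attributes could look like the following sketch (file names, page names, space keys, and IDs are placeholders):
confluence.with {
    input = [
        // published below the page named 'Architecture'; ancestorName has priority over ancestorId
        [file: "build/docs/html5/architecture.html", ancestorName: "Architecture", spaceKey: "ARC"],
        // published below the page with id '123456'
        [url: "https://example.com/docs/api.html", ancestorId: "123456"],
    ]
}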
api
Endpoint of the Confluence API (REST) to be used. It looks like https://[yourServer]/[context], where [context] is optional.
If you use Confluence Cloud, you can omit the context.
If you use Confluence Server, you may need to set a context, depending on your Confluence configuration.
rateLimit (since 3.2.0)
The rate limit for Confluence requests. Default is 10 requests per second.
useV1Api
This feature is available for docToolchain >= 3.1 only.
If you set this to false, ensure the api config is set to https://[yourCloudDomain] (mind: no context is given here).
If you are using Confluence Cloud, you can set this to false to use the new API V2. If you are using Confluence Server, you can set this to true to use the old API V1. If you are using Confluence Cloud and leave this set to true, you will get an error message once Atlassian turns off API V1 (starting 01.01.2024).
enforceNewEditor
Atlassian is currently rolling out a new editor for Confluence. If you want to use the new editor, set this to true; if you are using the old editor, set this to false. With the new editor you may experience some unexpected layout issues or changes, since the new editor does not yet have feature parity and may therefore be incompatible.
disableToC
This boolean configuration determines whether the table of contents (ToC) is disabled on the page once uploaded to Confluence. It is false by default, so the ToC is active.
pagePrefix/pageSuffix
Confluence can’t handle two pages with the same name - even with different casing (lowercase, UPPERCASE, or a mix).
This script matches pages regardless of case and refuses to replace a page whose name differs from an existing page only by casing.
Ideally, you should create a new Confluence space for each piece of larger documentation.
If you are restricted and can’t create new spaces, you can use pagePrefix/pageSuffix to define a prefix/suffix for the doc so that it doesn’t conflict with other page names.
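As a sketch, two documents that would otherwise collide in the same space can be disambiguated like this (the values are placeholders):
// docToolchainConfig.groovy of project A
pagePrefix = 'ProjectA - '
// docToolchainConfig.groovy of project B
pageSuffix = ' (Project B)'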
pageVersionComment
Set an optional comment for the new page version in Confluence.
credentials
For security reasons it is highly recommended to store your credentials in a separate file outside the Git repository, such as in your Home folder.
To authenticate with username and API token, use: credentials = "user:${new File("/users/me/apitoken").text}" or, base64-encoded, credentials = "user:${new File("/users/me/apitoken").text}".bytes.encodeBase64().toString(). You can create an API-token in your profile.
To authenticate with username and password, use the same pattern with your password in place of the API token, e.g. credentials = "user:password".bytes.encodeBase64().toString().
You can also set your username, password, or API token as an environment variable. To do so:
1. Open the file that contains the environment variables. On a Mac, go to your Home folder and open the file .zprofile.
2. ….
If you wish to simplify the injection of credentials from external sources, do the following:
1. In docToolchainConfig.groovy, do not enter the credentials. Make sure the credentials are escaped.
2. Create a gradle.properties file in the project or home directory. See the Gradle user guide.
3. Open the file and put the variables in it, each on its own line:
confluenceUser=myusername
confluencePass=myuserpassword
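A minimal gradle.properties sketch (values are placeholders); as noted in the configuration comments below, docToolchain picks up the 'confluenceUser' and 'confluencePass' keys itself:
# gradle.properties - keep this file out of the Git repository
confluenceUser=myusername
confluencePass=myuserpassword
Alternatively, the same keys can be passed on the command line as -PconfluenceUser=myUsername -PconfluencePass=myPassword (see the configuration comments below).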
apikey
In situations where you have to use full user authorisation because of internal Confluence permission handling, you’ll need to add the API-token in addition to the credentials.
The API-token cannot be added to the credentials because those are used for the user and password exchange.
Therefore the API-token can be added as the parameter apikey, which adds the token as a separate header field with key keyId and the value of apikey.
An example (including storing the real value outside this configuration) is: apikey = "${new File("/home/me/apitoken").text}".
bearerToken
You can pass a Confluence Personal Access Token as the bearerToken. It is an alternative to credentials. Do not confuse it with apikey.
extraPageContent
If you need to prefix your pages with a warning stating that 'this is generated content', this is where you do it.
enableAttachments
If the value is set to true, any links to local file references will be uploaded as attachments. The current implementation only supports a single folder, the name of which is used as a prefix to validate whether or not your file should be uploaded.
If you enable this feature and your folder does not start with 'attachment' (the default), you must adapt this prefix (see attachmentPrefix).
pageLimit
Limits the number of pages retrieved from the server to check if a page with this name already exists.
jiraServerId
Stores the Jira server ID that your Confluence instance is connected to. If a value is set, all anchors pointing to a Jira ticket will be replaced by the Confluence Jira macro. To function properly, jiraRoot must be configured (see exportJiraIssues).
attachmentPrefix
Stores the expected folder name of your output directory. Default is attachment. All files to attach need to be linked inside the document, for example:
link:attachment/myfolder/myfile.json[My API definition]
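A sketch of the corresponding docToolchainConfig.groovy settings, assuming your attachments live in a folder named 'attachment':
confluence.with {
    enableAttachments = true
    attachmentPrefix = 'attachment'   // default; adapt this if your folder is named differently
}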
proxy
If you need to provide a proxy to access Confluence, you can set a map with the keys host (e.g. 'my.proxy.com'), port (e.g. '1234') and schema (e.g. 'http') of your proxy.
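For example (host, port, and schema are placeholders):
proxy = [host: 'my.proxy.com', port: '1234', schema: 'http']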
useOpenapiMacro
If this option is present and equal to confluence-open-api or swagger-open-api, then any source block marked with class openapi will be wrapped in the Elitesoft Swagger Editor macro (see Elitesoft Swagger Editor). Which key to use depends on the version of the macro.
For backward compatibility, if this option is present and equal to true, the Elitesoft Swagger Editor macro will be used as well.
If this option is present and equal to "open-api", then any source block marked with class openapi will be wrapped in the Open API Documentation for Confluence macro (see Open API Documentation for Confluence). A download source (yaml) button is shown by default.
The plugin can be used in different ways:
-
copy/paste the content of the YAML file into the plugin without linking to the origin source, by including the YAML file via its URL:
[source.openapi,yaml]
----
\include::https://my-domain.com/path-to-yaml[]
----
-
copy/paste the content of the YAML file into the plugin without linking to the origin source, by using a YAML file in your project structure:
[source.openapi,yaml]
----
\include::my-yaml-file.yaml[]
----
-
create a link between the plugin and the YAML file without copying the content into the plugin. The advantage of this approach is that even if the API specification changes without the documentation being re-generated, the new version of the specification is used in Confluence:
[source.openapi,yaml,role="url:https://my-domain.com/path-to-yaml"]
----
\include::https://my-domain.com/path-to-yaml[]
----
// Configuration for publishToConfluence
confluence = [:]
// 'input' is an array of files to upload to Confluence with the ability
// to configure a different parent page for each file.
//
// Attributes
// - 'file': absolute or relative path to the asciidoc generated html file to be exported
// - 'url': absolute URL to an asciidoc generated html file to be exported
// - 'ancestorName' (optional): the name of the parent page in Confluence as string;
// this attribute has priority over ancestorId, but if page with given name doesn't exist,
// ancestorId will be used as a fallback
// - 'ancestorId' (optional): the id of the parent page in Confluence as string; leave this empty
// if a new parent shall be created in the space
// Set it for every file so the page scanning is done only for the given ancestor page trees.
//
// The following four keys can also be used in the global section below
// - 'spaceKey' (optional): page specific variable for the key of the confluence space to write to
// - 'subpagesForSections' (optional): The number of nested sub-pages to create. Default is '1'.
// '0' means creating all on one page.
// The following migration for removed configuration can be used.
// 'allInOnePage = true' is the same as 'subpagesForSections = 0'
// 'allInOnePage = false && createSubpages = false' is the same as 'subpagesForSections = 1'
// 'allInOnePage = false && createSubpages = true' is the same as 'subpagesForSections = 2'
// - 'pagePrefix' (optional): page specific variable, the pagePrefix will be a prefix for the page title and it's sub-pages
// use this if you only have access to one confluence space but need to store several
// pages with the same title - a different pagePrefix will make them unique
// - 'pageSuffix' (optional): same usage as prefix but appended to the title and it's subpages
// only 'file' or 'url' is allowed. If both are given, 'url' is ignored
confluence.with {
input = [
[ file: "build/docs/html5/arc42-template-de.html" ],
]
// endpoint of the confluenceAPI (REST) to be used
// https://[yourServer]
api = 'https://[yourServer]'
// requests per second for confluence API calls
rateLimit = 10
// Additionally, spaceKey, subpagesForSections, pagePrefix and pageSuffix can be globally defined here. The assignment in the input array has precedence
// the key of the confluence space to write to
spaceKey = 'asciidoc'
// if true, all pages will be created using the new editor v2
// enforceNewEditor = false
// variable to determine how many layers of sub pages should be created
subpagesForSections = 1
// the pagePrefix will be a prefix for each page title
// use this if you only have access to one confluence space but need to store several
// pages with the same title - a different pagePrefix will make them unique
pagePrefix = ''
pageSuffix = ''
/*
WARNING: It is strongly recommended to store credentials securely instead of committing plain text values to your git repository!!!
The tool expects credentials that belong to an account which has the right permissions to create and edit confluence pages in the given space.
Credentials can be used in a form of:
- passed parameters when calling script (-PconfluenceUser=myUsername -PconfluencePass=myPassword) which can be fetched as a secrets on CI/CD or
- gradle variables set through gradle properties (uses the 'confluenceUser' and 'confluencePass' keys)
Often, same credentials are used for Jira & Confluence, in which case it is recommended to pass CLI parameters for both entities as
-Pusername=myUser -Ppassword=myPassword
*/
//optional API-token to be added in case the credentials are needed for user and password exchange.
//apikey = "[API-token]"
// HTML Content that will be included with every page published
// directly after the TOC. If left empty no additional content will be
// added
// extraPageContent = '<ac:structured-macro ac:name="warning"><ac:parameter ac:name="title" /><ac:rich-text-body>This is a generated page, do not edit!</ac:rich-text-body></ac:structured-macro>'
extraPageContent = ''
// enable or disable attachment uploads for local file references
enableAttachments = false
// default attachmentPrefix = attachment - All files to attach will require to be linked inside the document.
// attachmentPrefix = "attachment"
// Optional proxy configuration, only used to access Confluence
// schema supports http and https
// proxy = [host: 'my.proxy.com', port: 1234, schema: 'http']
// Optional: specify which Confluence OpenAPI Macro should be used to render OpenAPI definitions
// possible values: ["confluence-open-api", "open-api", "swagger-open-api", true]. true is the same as "confluence-open-api" for backward compatibility
// useOpenapiMacro = "confluence-open-api"
}
CSS Styling
Some AsciiDoctor features depend on specific CSS style definitions. Unless these styles are defined, some formatting that is present in the HTML version will not be represented when published to Confluence. To configure Confluence to include additional style definitions:
-
Log in to Confluence as a space admin.
-
Go to the desired space.
-
Select Space tools > Look and Feel > Stylesheet.
-
Click Edit then enter the desired style definitions.
-
Click Save.
The default style definitions can be found in the AsciiDoc project as asciidoctor-default.css. You will most likely NOT want to include the entire thing, as some of the definitions are likely to disrupt Confluence’s layout.
The following style definitions are Confluence-compatible, and will enable the use of the built-in roles (big/small, underline/overline/line-through, COLOR/COLOR-background for the sixteen HTML color names):
.big{font-size:larger}
.small{font-size:smaller}
.underline{text-decoration:underline}
.overline{text-decoration:overline}
.line-through{text-decoration:line-through}
.aqua{color:#00bfbf}
.aqua-background{background-color:#00fafa}
.black{color:#000}
.black-background{background-color:#000}
.blue{color:#0000bf}
.blue-background{background-color:#0000fa}
.fuchsia{color:#bf00bf}
.fuchsia-background{background-color:#fa00fa}
.gray{color:#606060}
.gray-background{background-color:#7d7d7d}
.green{color:#006000}
.green-background{background-color:#007d00}
.lime{color:#00bf00}
.lime-background{background-color:#00fa00}
.maroon{color:#600000}
.maroon-background{background-color:#7d0000}
.navy{color:#000060}
.navy-background{background-color:#00007d}
.olive{color:#606000}
.olive-background{background-color:#7d7d00}
.purple{color:#600060}
.purple-background{background-color:#7d007d}
.red{color:#bf0000}
.red-background{background-color:#fa0000}
.silver{color:#909090}
.silver-background{background-color:#bcbcbc}
.teal{color:#006060}
.teal-background{background-color:#007d7d}
.white{color:#bfbfbf}
.white-background{background-color:#fafafa}
.yellow{color:#bfbf00}
.yellow-background{background-color:#fafa00}
Source
task publishToConfluence(
description: 'publishes the HTML rendered output to confluence',
group: 'docToolchain'
) {
doLast {
logger.info("docToolchain> docDir: "+docDir)
config.confluence.api = findProperty("confluence.api")?:config.confluence.api
//TODO default should be false, if the V1 has been removed in cloud
config.confluence.useV1Api = findProperty("confluence.useV1Api") != null ?
findProperty("confluence.useV1Api") : config.confluence.useV1Api != [:] ?
config.confluence.useV1Api :true
binding.setProperty('config',config)
binding.setProperty('docDir',docDir)
evaluate(new File(projectDir, 'core/src/main/groovy/org/docToolchain/scripts/asciidoc2confluence.groovy'))
}
}
package org.docToolchain.scripts
import org.docToolchain.atlassian.transformer.HtmlTransformer
/**
* Created by Ralf D. Mueller and Alexander Heusingfeld
* https://github.com/rdmueller/asciidoc2confluence
*
* this script expects an HTML document created with AsciiDoctor
* in the following style (default AsciiDoctor output)
* <div class="sect1">
* <h2>Page Title</h2>
* <div class="sectionbody">
* <div class="sect2">
* <h3>Sub-Page Title</h3>
* </div>
* <div class="sect2">
* <h3>Sub-Page Title</h3>
* </div>
* </div>
* </div>
* <div class="sect1">
* <h2>Page Title</h2>
* ...
* </div>
*
*/
/*
Additions for issue #342 marked as #342-dierk42
;-)
*/
// some dependencies
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.nodes.TextNode
import org.jsoup.select.Elements
import groovy.transform.Field
import java.nio.file.Path
import java.security.MessageDigest
import static groovy.io.FileType.FILES
import org.docToolchain.atlassian.confluence.clients.ConfluenceClientV1
import org.docToolchain.atlassian.confluence.clients.ConfluenceClientV2
import org.docToolchain.configuration.ConfigService
import org.docToolchain.atlassian.confluence.ConfluenceService
@Field
ConfigService configService = new ConfigService(config)
@Field
ConfluenceService confluenceService = new ConfluenceService(configService)
@Field
def confluenceClient = configService.getConfigProperty("confluence.useV1Api") ?
new ConfluenceClientV1(configService) :
new ConfluenceClientV2(configService)
@Field
def CDATA_PLACEHOLDER_START = '<cdata-placeholder>'
@Field
def CDATA_PLACEHOLDER_END = '</cdata-placeholder>'
@Field
def baseUrl
def allPages
// #938-mksiva: global variable to hold input spaceKey passed in the Config.groovy
def spaceKeyInput
// configuration
def confluenceSpaceKey
def confluenceSubpagesForSections
@Field
def confluencePagePrefix
@Field
def confluencePageSuffix
//def baseApiPath = new URI(config.confluence.api).path
// helper functions
def MD5(String s) {
MessageDigest.getInstance("MD5").digest(s.bytes).encodeHex().toString()
}
def parseAdmonitionBlock(block, String type) {
content = block.select(".content").first()
titleElement = content.select(".title")
titleText = ''
if(titleElement != null) {
titleText = "<ac:parameter ac:name=\"title\">${titleElement.text()}</ac:parameter>"
titleElement.remove()
}
block.after("<ac:structured-macro ac:name=\"${type}\">${titleText}<ac:rich-text-body>${content}</ac:rich-text-body></ac:structured-macro>")
block.remove()
}
/* #342-dierk42
add labels to a Confluence page. Labels are taken from :keywords: which
are converted as meta tags in HTML. Building the array: see below
Confluence allows adding labels only after creation of a page.
Therefore we need extra API calls.
Currently the labels are added one by one. Suggestion for improvement:
Build a label structure of all labels and place them with one call.
Replaces existing labels. No harm.
Does not check for deleted labels when keywords are deleted from source
document!
*/
def addLabels = { def pageId, def labelsArray ->
// Attach each label in an API call of its own. The only prefix possible
// in our own Confluence is 'global'
labelsArray.each { label ->
label_data = [
prefix : 'global',
name : label
]
confluenceClient.addLabel(pageId, label_data)
println "added label " + label + " to page ID " + pageId
}
}
def uploadAttachment = { def pageId, String url, String fileName, String note ->
def is
def localHash
if (url.startsWith('http')) {
is = new URL(url).openStream()
//build a hash of the attachment
localHash = MD5(new URL(url).openStream().text)
} else {
is = new File(url).newDataInputStream()
//build a hash of the attachment
localHash = MD5(new File(url).newDataInputStream().text)
}
def attachment = confluenceClient.getAttachment(pageId, fileName)
if (attachment.size()>0 && attachment.results.size()>0) {
// attachment exists. need an update?
if (confluenceClient.attachmentHasChanged(attachment, localHash)) {
//hash is different -> attachment needs to be updated
confluenceClient.updateAttachment(pageId, attachment.results[0].id, is, fileName, note, localHash)
println " updated attachment"
}
} else {
confluenceClient.createAttachment(pageId, is, fileName, note, localHash)
}
}
def realTitle(pageTitle){
confluencePagePrefix + pageTitle + confluencePageSuffix
}
def rewriteMarks (body) {
// Confluence strips out mark elements. Replace them with default formatting.
    body.select('mark').wrap('<span style="background:#ff0;color:#000"></span>').unwrap()
}
// #352-LuisMuniz: Helper methods
// Fetch all pages of the defined config ancestorsIds. Only keep relevant info in the pages Map
// The map is indexed by lower-case title
def retrieveAllPages = { String spaceKey ->
// #938-mksiva: added a condition spaceKeyInput is null, if it is null, it means that, space key is different, so re fetch all pages.
if (allPages != null && spaceKeyInput == null) {
println "allPages already retrieved"
allPages
} else {
def pageIds = []
def checkSpace = false
int pageLimit = config.confluence.pageLimit ? config.confluence.pageLimit : 100
config.confluence.input.each { input ->
if (!input.ancestorId) {
// if one ancestorId is missing we should scan the whole space
checkSpace = true;
return
}
pageIds.add(input.ancestorId)
}
println (".")
if(checkSpace) {
allPages = confluenceClient.fetchPagesBySpaceKey(spaceKey, pageLimit)
} else {
allPages = confluenceClient.fetchPagesByAncestorId(pageIds, pageLimit)
}
allPages
}
}
// Retrieve a page by id with contents and version
def retrieveFullPage = { String id ->
println("retrieving page with id " + id)
confluenceClient.retrieveFullPageById(id)
}
//if a parent has been specified, check whether a page has the same parent.
boolean hasRequestedParent(Map existingPage, String requestedParentId) {
if (requestedParentId) {
existingPage.parentId == requestedParentId
} else {
true
}
}
def rewriteDescriptionLists(body) {
def TAGS = [ dt: 'th', dd: 'td' ]
body.select('dl').each { dl ->
// WHATWG allows wrapping dt/dd in divs, simply unwrap them
dl.select('div').each { it.unwrap() }
// group dts and dds that belong together, usually it will be a 1:1 relation
// but HTML allows for different constellations
def rows = []
def current = [dt: [], dd: []]
rows << current
dl.select('dt, dd').each { child ->
def tagName = child.tagName()
if (tagName == 'dt' && current.dd.size() > 0) {
// dt follows dd, start a new group
current = [dt: [], dd: []]
rows << current
}
current[tagName] << child.tagName(TAGS[tagName])
child.remove()
}
rows.each { row ->
def sizes = [dt: row.dt.size(), dd: row.dd.size()]
def rowspanIdx = [dt: -1, dd: sizes.dd - 1]
def rowspan = Math.abs(sizes.dt - sizes.dd) + 1
def max = sizes.dt
if (sizes.dt < sizes.dd) {
max = sizes.dd
rowspanIdx = [dt: sizes.dt - 1, dd: -1]
}
(0..<max).each { idx ->
def tr = dl.appendElement('tr')
['dt', 'dd'].each { type ->
if (sizes[type] > idx) {
tr.appendChild(row[type][idx])
if (idx == rowspanIdx[type] && rowspan > 1) {
row[type][idx].attr('rowspan', "${rowspan}")
}
} else if (idx == 0) {
tr.appendElement(TAGS[type]).attr('rowspan', "${rowspan}")
}
}
}
}
dl.wrap('<table></table>')
.unwrap()
}
}
def rewriteInternalLinks (body, anchors, pageAnchors) {
// find internal cross-references and replace them with link macros
body.select('a[href]').each { a ->
def href = a.attr('href')
if (href.startsWith('#')) {
def anchor = href.substring(1)
def pageTitle = anchors[anchor] ?: pageAnchors[anchor]
if (pageTitle && a.text()) {
// as Confluence insists on link texts to be contained
// inside CDATA, we have to strip all HTML and
// potentially lose styling that way.
a.html(a.text())
a.wrap("<ac:link${anchors.containsKey(anchor) ? ' ac:anchor="' + anchor + '"' : ''}></ac:link>")
.before("<ri:page ri:content-title=\"${realTitle pageTitle}\"/>")
.wrap("<ac:plain-text-link-body>${CDATA_PLACEHOLDER_START}${CDATA_PLACEHOLDER_END}</ac:plain-text-link-body>")
.unwrap()
}
}
}
}
def rewriteJiraLinks = { body ->
// find links to jira tickets and replace them with jira macros
body.select('a[href]').each { a ->
def href = a.attr('href')
if (href.startsWith(config.jira.api + "/browse/")) {
def ticketId = a.text()
a.before("""<ac:structured-macro ac:name=\"jira\" ac:schema-version=\"1\">
<ac:parameter ac:name=\"key\">${ticketId}</ac:parameter>
<ac:parameter ac:name=\"serverId\">${config.confluence.jiraServerId}</ac:parameter>
</ac:structured-macro>""")
a.remove()
}
}
}
def rewriteOpenAPI (org.jsoup.nodes.Element body) {
if (config.confluence.useOpenapiMacro == true || config.confluence.useOpenapiMacro == 'confluence-open-api') {
body.select('div.openapi pre > code').each { code ->
def parent=code.parent()
def rawYaml=code.wholeText()
code.parent()
.wrap('<ac:structured-macro ac:name="confluence-open-api" ac:schema-version="1" ac:macro-id="1dfde21b-6111-4535-928a-470fa8ae3e7d"></ac:structured-macro>')
.unwrap()
code.wrap("<ac:plain-text-body>${CDATA_PLACEHOLDER_START}${CDATA_PLACEHOLDER_END}</ac:plain-text-body>")
.replaceWith(new TextNode(rawYaml))
}
} else if (config.confluence.useOpenapiMacro == 'swagger-open-api') {
body.select('div.openapi pre > code').each { code ->
def parent=code.parent()
def rawYaml=code.wholeText()
code.parent()
.wrap('<ac:structured-macro ac:name="swagger-open-api" ac:schema-version="1" ac:macro-id="f9deda8a-1375-4488-8ca5-3e10e2e4ee70"></ac:structured-macro>')
.unwrap()
code.wrap("<ac:plain-text-body>${CDATA_PLACEHOLDER_START}${CDATA_PLACEHOLDER_END}</ac:plain-text-body>")
.replaceWith(new TextNode(rawYaml))
}
} else if (config.confluence.useOpenapiMacro == 'open-api') {
def includeURL=null
for (Element e : body.select('div .listingblock.openapi')) {
for (String s : e.className().split(" ")) {
if (s.startsWith("url")) {
//include the link to the URL for the macro
includeURL = s.replace('url:', '')
}
}
}
body.select('div.openapi pre > code').each { code ->
def parent=code.parent()
def rawYaml=code.wholeText()
code.parent()
.wrap('<ac:structured-macro ac:name="open-api" ac:schema-version="1" data-layout="default" ac:macro-id="4302c9d8-fca4-4f14-99a9-9885128870fa"></ac:structured-macro>')
.unwrap()
if (includeURL!=null)
{
code.before('<ac:parameter ac:name="url">'+includeURL+'</ac:parameter>')
}
else {
//default: show download button
code.before('<ac:parameter ac:name="showDownloadButton">true</ac:parameter>')
code.wrap("<ac:plain-text-body>${CDATA_PLACEHOLDER_START}${CDATA_PLACEHOLDER_END}</ac:plain-text-body>")
.replaceWith(new TextNode(rawYaml))
}
}
}
}
def getEmbeddedImageData(src){
def imageData = src.split("[;:,]")
def fileExtension = imageData[1].split("/")[1]
// treat svg+xml as svg to be able to create a file from the embedded image
// more MIME types: https://www.iana.org/assignments/media-types/media-types.xhtml#image
if(fileExtension == "svg+xml"){
fileExtension = "svg"
}
return Map.of(
"fileExtension", fileExtension,
"encoding", imageData[2],
"encodedContent", imageData[3]
)
}
def handleEmbeddedImage(basePath, fileName, fileExtension, encodedContent) {
def imageDir = "images/"
if(config.imageDirs.size() > 0){
def dir = config.imageDirs.find { it ->
def configureImagesDir = it.replace('./', '/')
Path.of(basePath, configureImagesDir, fileName).toFile().exists()
}
if(dir != null){
imageDir = dir.replace('./', '/')
}
}
if(!Path.of(basePath, imageDir, fileName).toFile().exists()){
println "Could not find embedded image at a known location"
def embeddedImagesLocation = "/confluence/images/"
new File(basePath + embeddedImagesLocation).mkdirs()
def imageHash = MD5(encodedContent)
println "Embedded Image Hash " + imageHash
def image = new File(basePath + embeddedImagesLocation + imageHash + ".${fileExtension}")
if(!image.exists()){
println "Creating image at " + basePath + embeddedImagesLocation
image.withOutputStream {output ->
output.write(encodedContent.decodeBase64())}
}
fileName = imageHash + ".${fileExtension}"
return Map.of(
"filePath", image.canonicalPath,
"fileName", fileName
)
} else {
return Map.of(
"filePath", basePath + imageDir + fileName,
"fileName", fileName
)
}
}
//modify local page in order to match the internal confluence storage representation a bit better
//definition lists are not displayed by confluence, so turn them into tables
//body can be of type Element or Elements
def parseBody(body, anchors, pageAnchors) {
def uploads = []
rewriteOpenAPI body
body.select('div.paragraph').unwrap()
body.select('div.ulist').unwrap()
//body.select('div.sect3').unwrap()
[ 'note':'info',
'warning':'warning',
'important':'warning',
'caution':'note',
'tip':'tip' ].each { adType, cType ->
body.select('.admonitionblock.'+adType).each { block ->
parseAdmonitionBlock(block, cType)
}
}
//special for the arc42-template
body.select('div.arc42help').select('.content')
.wrap('<ac:structured-macro ac:name="expand"></ac:structured-macro>')
.wrap('<ac:rich-text-body></ac:rich-text-body>')
.wrap('<ac:structured-macro ac:name="info"></ac:structured-macro>')
.before('<ac:parameter ac:name="title">arc42</ac:parameter>')
.wrap('<ac:rich-text-body><p></p></ac:rich-text-body>')
body.select('div.arc42help').unwrap()
body.select('div.title').wrap("<strong></strong>").before("<br />").wrap("<div></div>")
body.select('div.listingblock').wrap("<p></p>").unwrap()
// see if we can find referenced images and fetch them
new File("tmp/images/.").mkdirs()
// find images, extract their URLs for later uploading (after we know the pageId) and replace them with this macro:
// <ac:image ac:align="center" ac:width="500">
// <ri:attachment ri:filename="deployment-context.png"/>
// </ac:image>
body.select('img').each { img ->
def src = img.attr('src')
def imgWidth = img.attr('width')?:500
def imgAlign = img.attr('align')?:"center"
//it is not an online image, so upload it to confluence and use the ri:attachment tag
if(!src.startsWith("http")) {
def sanitizedBaseUrl = baseUrl.toString().replaceAll('\\\\','/').replaceAll('/[^/]*$','/')
def newUrl
def fileName
//it is an embedded image
if(src.startsWith("data:image")){
def imageData = getEmbeddedImageData(src)
def fileExtension = imageData.get("fileExtension")
def encodedContent = imageData.get("encodedContent")
fileName = img.attr('alt').replaceAll(/\s+/,"_").concat(".${fileExtension}")
def embeddedImage = handleEmbeddedImage(sanitizedBaseUrl, fileName, fileExtension, encodedContent)
newUrl = embeddedImage.get("filePath")
fileName = embeddedImage.get("fileName")
}else {
newUrl = sanitizedBaseUrl + src
fileName = java.net.URLDecoder.decode((src.tokenize('/')[-1]),"UTF-8")
}
newUrl = java.net.URLDecoder.decode(newUrl,"UTF-8")
println " image: "+newUrl
uploads << [0,newUrl,fileName,"automatically uploaded"]
img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:attachment ri:filename=\"${fileName}\"/></ac:image>")
}
// it is an online image, so we have to use the ri:url tag
else {
img.after("<ac:image ac:align=\"${imgAlign}\" ac:width=\"${imgWidth}\"><ri:url ri:value=\"${src}\"/></ac:image>")
}
img.remove()
}
if(config.confluence.enableAttachments){
attachmentPrefix = config.confluence.attachmentPrefix ? config.confluence.attachmentPrefix : 'attachment'
body.select('a').each { link ->
def src = link.attr('href')
println " attachment src: "+src
//upload it to confluence and use the ri:attachment tag
if(src.startsWith(attachmentPrefix)) {
def newUrl = baseUrl.toString().replaceAll('\\\\','/').replaceAll('/[^/]*$','/')+src
def fileName = java.net.URLDecoder.decode((src.tokenize('/')[-1]),"UTF-8")
newUrl = java.net.URLDecoder.decode(newUrl,"UTF-8")
uploads << [0,newUrl,fileName,"automatically uploaded non-image attachment by docToolchain"]
def uriArray=fileName.split("/")
def pureFilename = uriArray[uriArray.length-1]
def innerhtml = link.html()
link.after("<ac:structured-macro ac:name=\"view-file\" ac:schema-version=\"1\"><ac:parameter ac:name=\"name\"><ri:attachment ri:filename=\"${pureFilename}\"/></ac:parameter></ac:structured-macro>")
link.after("<ac:link><ri:attachment ri:filename=\"${pureFilename}\"/><ac:plain-text-link-body> <![CDATA[\"${innerhtml}\"]]></ac:plain-text-link-body></ac:link>")
link.remove()
}
}
}
if(config.confluence.jiraServerId){
rewriteJiraLinks body
}
rewriteMarks body
rewriteDescriptionLists body
rewriteInternalLinks body, anchors, pageAnchors
//not really sure if we must check the type here
String bodyString = body
if(body instanceof Element){
bodyString = body.html()
}
Element saneHtml = new Document("").outputSettings(new Document.OutputSettings().prettyPrint(false)).html(bodyString)
def pageString = new HtmlTransformer().transformToConfluenceFormat(saneHtml)
return Map.of(
"page", pageString,
"uploads", uploads
)
}
def generateAndAttachToC(localPage) {
def content
if(config.confluence.disableToC){
def prefix = (config.confluence.extraPageContent?:'')
content = prefix+localPage
}else{
def default_toc = '<p><ac:structured-macro ac:name="toc"/></p>'
def prefix = (config.confluence.tableOfContents?:default_toc)+(config.confluence.extraPageContent?:'')
content = prefix+localPage
def default_children = '<p><ac:structured-macro ac:name="children"><ac:parameter ac:name="sort">creation</ac:parameter></ac:structured-macro></p>'
content += (config.confluence.tableOfChildren?:default_children)
}
def localHash = MD5(localPage)
content += '<ac:placeholder>hash: #'+localHash+'#</ac:placeholder>'
return content
}
// the create-or-update functionality for confluence pages
// #342-dierk42: added parameter 'keywords'
def pushToConfluence = { pageTitle, pageBody, parentId, anchors, pageAnchors, keywords ->
parentId = parentId?.toString()
def deferredUpload = []
String realTitleLC = realTitle(pageTitle).toLowerCase()
String realTitle = realTitle(pageTitle)
//try to get an existing page
def parsedBody = parseBody(pageBody, anchors, pageAnchors)
localPage = parsedBody.get("page")
deferredUpload.addAll(parsedBody.get("uploads"))
def localHash = MD5(localPage)
localPage = generateAndAttachToC(localPage)
// #938-mksiva: Changed the 3rd parameter from 'config.confluence.spaceKey' to 'confluenceSpaceKey' as it was always taking the default spaceKey
// instead of the one passed in the input for each row.
def pages = retrieveAllPages(confluenceSpaceKey)
println("pages retrieved")
// println "Suche nach vorhandener Seite: " + pageTitle
Map existingPage = pages[realTitleLC]
def page
if (existingPage) {
if (hasRequestedParent(existingPage, parentId)) {
page = retrieveFullPage(existingPage.id as String)
} else {
page = null
}
} else {
page = null
}
// println "Gefunden: " + page.id + " Titel: " + page.title
if (page) {
println "found existing page: " + page.id +" version "+page.version.number
//extract hash from remote page to see if it is different from local one
def remotePage = page.body.storage.value.toString().trim()
def remoteHash = remotePage =~ /(?ms)hash: #([^#]+)#/
remoteHash = remoteHash.size()==0?"":remoteHash[0][1]
// println "remoteHash: " + remoteHash
// println "localHash: " + localHash
if (remoteHash == localHash) {
println "page hasn't changed!"
deferredUpload.each {
uploadAttachment(page?.id, it[1], it[2], it[3])
}
deferredUpload = []
// #324-dierk42: Add keywords as labels to page.
if (keywords) {
addLabels(page.id, keywords)
}
return page.id
} else {
def newPageVersion = (page.version.number as Integer) + 1
confluenceClient.updatePage(
page.id,
realTitle,
confluenceSpaceKey,
localPage,
newPageVersion,
config.confluence.pageVersionComment ?: '',
parentId
)
println "> updated page "+page.id
deferredUpload.each {
uploadAttachment(page.id, it[1], it[2], it[3])
}
deferredUpload = []
// #324-dierk42: Add keywords as labels to page.
if (keywords) {
addLabels(page.id, keywords)
}
return page.id
}
} else {
//#352-LuisMuniz if the existing page's parent does not match the requested parentId, fail
if (existingPage && !hasRequestedParent(existingPage, parentId)) {
throw new IllegalArgumentException("Cannot create page, page with the same "
+ "title=${existingPage.title} "
+ "with id=${existingPage.id} already exists in the space. "
+ "A Confluence page title must be unique within a space, consider specifying a 'confluencePagePrefix' in ConfluenceConfig.groovy")
}
//create a page
page = confluenceClient.createPage(
realTitle,
confluenceSpaceKey,
localPage,
config.confluence.pageVersionComment ?: '',
parentId
)
println "> created page "+page?.id
deferredUpload.each {
uploadAttachment(page?.id, it[1], it[2], it[3])
}
deferredUpload = []
// #324-dierk42: Add keywords as labels to page.
if (keywords) {
addLabels(page?.id, keywords)
}
return page?.id
}
}
def parseAnchors(page) {
def anchors = [:]
page.body.select('[id]').each { anchor ->
def name = anchor.attr('id')
anchors[name] = page.title
anchor.before("<ac:structured-macro ac:name=\"anchor\"><ac:parameter ac:name=\"\">${name}</ac:parameter></ac:structured-macro>")
}
anchors
}
def pushPages
pushPages = { pages, anchors, pageAnchors, labels ->
pages.each { page ->
page.title = page.title.trim()
println page.title
def id = pushToConfluence page.title, page.body, page.parent, anchors, pageAnchors, labels
page.children*.parent = id
// println "Push children von id " + id
pushPages page.children, anchors, pageAnchors, labels
// println "Ende Push children von id " + id
}
}
def recordPageAnchor(head) {
def a = [:]
if (head.attr('id')) {
a[head.attr('id')] = head.text()
}
a
}
def promoteHeaders(tree, start, offset) {
(start..7).each { i ->
tree.select("h${i}").tagName("h${i-offset}").before('<br />')
}
}
def retrievePageIdByName = { String name ->
confluenceClient.retrievePageIdByName(name, confluenceSpaceKey)
}
def getPagesRecursive(Element element, String parentId, Map anchors, Map pageAnchors, int level, int maxLevel) {
def pages = []
element.select("div.sect${level}").each { sect ->
def title = sect.select("h${level + 1}").text()
pageAnchors.putAll(recordPageAnchor(sect.select("h${level + 1}")))
Elements pageBody
if (level == 1) {
pageBody = sect.select('div.sectionbody')
} else {
pageBody = new Elements(sect)
pageBody.select("h${level + 1}").remove()
}
def currentPage = [
title: title,
body: pageBody,
children: [],
parent: parentId
]
if (maxLevel > level) {
currentPage.children.addAll(getPagesRecursive(sect, null, anchors, pageAnchors, level + 1, maxLevel))
pageBody.select("div.sect${level + 1}").remove()
} else {
pageBody.select("div.sect${level + 1}").unwrap()
}
promoteHeaders sect, level + 2, level + 1
pages << currentPage
anchors.putAll(parseAnchors(currentPage))
}
return pages
}
def getPages(Document dom, String parentId, int maxLevel) {
def anchors = [:]
def pageAnchors = [:]
def sections = pages = []
def title = dom.select('h1').text()
if (maxLevel <= 0) {
dom.select('div#content').each { pageBody ->
pageBody.select('div.sect2').unwrap()
promoteHeaders pageBody, 2, 1
def page = [title : title,
body : pageBody,
children: [],
parent : parentId]
pages << page
sections = page.children
parentId = null
anchors.putAll(parseAnchors(page))
}
} else {
// let's try to select the "first page" and push it to confluence
dom.select('div#preamble div.sectionbody').each { pageBody ->
pageBody.select('div.sect2').unwrap()
def preamble = [
title: title,
body: pageBody,
children: [],
parent: parentId
]
pages << preamble
sections = preamble.children
parentId = null
anchors.putAll(parseAnchors(preamble))
}
sections.addAll(getPagesRecursive(dom, parentId, anchors, pageAnchors, 1, maxLevel))
}
return [pages, anchors, pageAnchors]
}
if(config.confluence.inputHtmlFolder) {
htmlFolder = "${docDir}/${config.confluence.inputHtmlFolder}"
println "Starting processing files in folder: " + config.confluence.inputHtmlFolder
def dir = new File(htmlFolder)
dir.eachFileRecurse (FILES) { fileName ->
if (fileName.isFile()){
def map = [file: config.confluence.inputHtmlFolder+fileName.getName()]
config.confluence.input.add(map)
}
}
}
config.confluence.input.each { input ->
// TODO check why this is necessary
if(input.file) {
input.file = confluenceService.checkAndBuildCanonicalFileName(input.file)
// assigned, but never used in pushToConfluence(...) (fixed here)
// #938-mksiva: assign spaceKey passed for each file in the input
spaceKeyInput = input.spaceKey
confluenceSpaceKey = input.spaceKey ?: config.confluence.spaceKey
confluenceCreateSubpages = (input.createSubpages != null) ? input.createSubpages : config.confluence.createSubpages
confluenceAllInOnePage = (input.allInOnePage != null) ? input.allInOnePage : config.confluence.allInOnePage
if (!(confluenceCreateSubpages instanceof ConfigObject && confluenceAllInOnePage instanceof ConfigObject)) {
println "ERROR:"
println "Deprecated configuration, migrate as follows:"
println "allInOnePage = true -> subpagesForSections = 0"
println "allInOnePage = false && createSubpages = false -> subpagesForSections = 1"
println "allInOnePage = false && createSubpages = true -> subpagesForSections = 2"
throw new RuntimeException("config problem")
}
confluenceSubpagesForSections = (input.subpagesForSections != null) ? input.subpagesForSections : config.confluence.subpagesForSections
if (confluenceSubpagesForSections instanceof ConfigObject) {
confluenceSubpagesForSections = 1
}
// hard to read in case of using :sectnums: -> so we add a suffix
confluencePagePrefix = input.pagePrefix ?: config.confluence.pagePrefix
// added
confluencePageSuffix = input.pageSuffix ?: config.confluence.pageSuffix
confluencePreambleTitle = input.preambleTitle ?: config.confluence.preambleTitle
if (!(confluencePreambleTitle instanceof ConfigObject)) {
println "ERROR:"
println "Deprecated configuration, use first level heading in document instead of preambleTitle configuration"
throw new RuntimeException("config problem")
}
File htmlFile = new File(input.file)
baseUrl = htmlFile
Document dom = confluenceService.parseFile(htmlFile)
// if ancestorName is defined, try to find a matching ancestorId in confluence
def retrievedAncestorId
if (input.ancestorName) {
// Retrieve a page id by name
retrievedAncestorId = retrievePageIdByName(input.ancestorName)
println("Retrieved pageId for given ancestorName '${input.ancestorName}' is ${retrievedAncestorId}")
}
// if input does not contain an ancestorName, check if there is ancestorId, otherwise check if there is a global one
def parentId = retrievedAncestorId ?: input.ancestorId ?: config.confluence.ancestorId
// if parentId is still not set, create a new parent page (parentId = null)
parentId = parentId ?: null
//println("ancestorName: '${input.ancestorName}', ancestorId: ${input.ancestorId} ---> final parentId: ${parentId}")
// #342-dierk42: get the keywords from the meta tags
def keywords = confluenceService.getKeywords(dom)
def (pages, anchors, pageAnchors) = getPages(dom, parentId, confluenceSubpagesForSections)
pushPages pages, anchors, pageAnchors, keywords
if (parentId) {
println "published to ${config.confluence.api - "rest/api/"}spaces/${confluenceSpaceKey}/pages/${parentId}"
} else {
println "published to ${config.confluence.api - "rest/api/"}spaces/${confluenceSpaceKey}"
}
}
}
""
3.12. convertToDocx
1 minute to read
At a Glance
Before You Begin
Before using this task:
-
Install pandoc.
-
Ensure that 'docbook' and 'docx' are added to the inputFiles formats in Config.groovy (see the sketch after this list).
-
As an optional step, specify a reference doc file with custom stylesheets (see task createReferenceDoc).
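For illustration, a Config.groovy entry enabling both formats might look like this sketch (the file name is a placeholder):
inputFiles = [
    [file: 'arc42-template.adoc', formats: ['html', 'docbook', 'docx']],
]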
Further Reading and Resources
Read the Render AsciiDoc to docx (MS Word) blog post.
Source
task convertToDocx (
group: 'docToolchain',
description: 'converts file to .docx via pandoc. Needs pandoc installed.',
type: Exec
) {
    // All files with option `docx` in config.groovy are converted to docbook and then to docx.
def sourceFilesDocx = sourceFiles.findAll { 'docx' in it.formats }
def explicitSourceFilesCount = sourceFilesDocx.size()
if(explicitSourceFilesCount==0){
sourceFilesDocx = sourceFiles.findAll { 'docbook' in it.formats }
}
sourceFilesDocx.each {
def sourceFile = it.file.replace('.adoc', '.xml')
def targetFile = sourceFile.replace('.xml', '.docx')
new File("$targetDir/docx/$targetFile")
.getParentFile()
.getAbsoluteFile().mkdirs()
workingDir "$targetDir/docbook"
executable = "pandoc"
if(referenceDocFile?.trim()) {
args = ["-r","docbook",
"-t","docx",
"-o","../docx/$targetFile",
"--reference-doc=${docDir}/${referenceDocFile}",
sourceFile]
} else {
args = ["-r","docbook",
"-t","docx",
"-o","./../docx/$targetFile",
sourceFile]
}
}
doFirst {
if(sourceFilesDocx.size()==0){
throw new Exception ("""
>> No source files defined for type 'docx'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
if(explicitSourceFilesCount==0) {
logger.warn('WARNING: No source files defined for type "docx". Converting with best effort')
}
}
}
3.13. createReferenceDoc
About This Task
This task creates a reference docx file used by pandoc during the docbook-to-docx conversion of task convertToDocx.
Edit this file so it uses your preferred styles.
The contents of the reference docx are ignored, but its stylesheets and document properties (including margins, page size, header, and footer) are used in the new docx. For more information, see the Pandoc User’s Guide: Options affecting specific writers (--reference-doc). If you have problems changing the default table style, see https://github.com/jgm/pandoc/issues/3275.
Config.groovy Notes
The 'referenceDocFile' property must be set to your custom reference file in Config.groovy:
inputPath = '.'
// use a style reference file in the input path for conversion from docbook to docx
referenceDocFile = "${inputPath}/my-ref-file.docx"
Source
task createReferenceDoc (
group: 'docToolchain helper',
description: 'creates a docx file to be used as a format style reference in task convertToDocx. Needs pandoc installed.',
type: Exec
) {
workingDir "$docDir"
executable = "pandoc"
args = ["-o", "${docDir}/${referenceDocFile}",
"--print-default-data-file",
"reference.docx"]
doFirst {
if(!(referenceDocFile?.trim())) {
throw new GradleException("Option `referenceDocFile` is not defined in config.groovy or has an empty value.")
}
}
}
3.14. convertToEpub
1 minute to read
At a Glance
Dependency: pandoc must be installed.
About This Task
This task uses pandoc to convert the DocBook output from AsciiDoctor to ePub.
This publishes the output as an eBook which can be read using any eBook reader.
The resulting file can be found in build/docs/epub.
Further Reading and Resources
Turn your Document into an Audio-Book blog post.
Source
task convertToEpub (
group: 'docToolchain',
description: 'converts file to .epub via pandoc. Needs pandoc installed.',
type: Exec
) {
    // All files with option `epub` in config.groovy are converted to docbook and then to epub.
def sourceFilesEpub = sourceFiles.findAll { 'epub' in it.formats }
def explicitSourceFilesCount = sourceFilesEpub.size()
if(explicitSourceFilesCount==0){
sourceFilesEpub = sourceFiles.findAll { 'docbook' in it.formats }
}
sourceFilesEpub.each {
def sourceFile = it.file.replace('.adoc', '.xml')
def targetFile = sourceFile.replace('.xml', '.epub')
new File("$targetDir/epub/$targetFile")
.getParentFile()
.getAbsoluteFile().mkdirs()
workingDir "$targetDir/docbook"
executable = "pandoc"
args = ['-r','docbook',
'-t','epub',
'-o',"../epub/$targetFile",
sourceFile]
}
doFirst {
if(sourceFilesEpub.size()==0){
throw new Exception ("""
>> No source files defined for type 'epub'.
>> Please specify at least one inputFile in your docToolchainConfig.groovy
""")
}
if(explicitSourceFilesCount==0) {
logger.warn('WARNING: No source files defined for type "epub". Converting with best effort')
}
}
}
3.15. exportEA
4 minutes to read
At a Glance
About This Task
By default, no special configuration is necessary. However, several optional parameter configurations are available to specify a project and packages to be used for export. These parameters can be used independently of one another. A sample of how to edit your project’s Config.groovy is provided in the 'Config.groovy' of the docToolchain project itself.
Important
Currently this feature is WINDOWS-only. See this related issue.
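For orientation, a hedged Config.groovy sketch using the optional parameters described below (all values are placeholders and every parameter may be omitted):
exportEA.with {
    // connection = 'DBType=1;Connect=Provider=SQLOLEDB.1;...'   // placeholder, see 'connection' below
    packageFilter = []            // empty list: all packages are analysed
    exportPath = 'src/docs'       // default
    searchPath = 'src/docs'       // default
    glossaryAsciiDocFormat = ''   // empty string: no glossary is exported
    diagramAttributes = ''        // empty string: no diagram attribute file is written
    additionalOptions = ''        // e.g. 'KeepFirstDiagram'
}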
The Optional Parameter Configurations
connection
Either set the connection to a certain project, or comment it out to use all project files inside the src folder or its child folder.
packageFilter
Add one or multiple packageGUIDs to be used for export. All packages are analysed, if no packageFilter is set.
exportPath
Relative path to base 'docDir' to which the diagrams and notes are to be exported. Default: "src/docs". Example: with docDir = 'D:\work\mydoc\' and exportPath = 'src/pdocs', images will be exported to 'D:\work\mydoc\src\pdocs\images\ea' and notes to 'D:\work\mydoc\src\pdocs\ea'.
searchPath
Relative path to base 'docDir' in which Enterprise Architect project files are searched. Default: "src/docs". Example: with docDir = 'D:\work\mydoc\' and searchPath = 'src/projects', the lookup for eap and eapx files starts in 'D:\work\mydoc\src\projects' and goes down the folder structure. Note: in case parameter 'connection' is already defined, the searchPath value is also used. exportEA starts by opening the database given in parameter 'connection' first, then looks for further project files either in the searchPath (if set) or in the docDir folder of the project.
glossaryAsciiDocFormat
Whether or not the EA project glossary is exported depends on this parameter. If not set or an empty string, no glossary is exported. The glossaryAsciiDocFormat string is used to format each glossary entry in a certain AsciiDoc format.
The following placeholders are defined for the format string: ID, TERM, MEANING, TYPE. One or more can be used by the output format. For example:
A valid output format is to include the glossary as a flat list. The file can be included where needed in the documentation.
glossaryAsciiDocFormat = "TERM:: MEANING"
Other format strings can be used to include it as a table row. The glossary terms are sorted in alphabetical order.
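For example, a hedged format string that renders each entry as a table row could look like this (the surrounding table delimiters and header must be provided where the generated file is included):
glossaryAsciiDocFormat = "| TERM | MEANING | TYPE"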
glossaryTypes
This parameter is used in case a glossaryAsciiDocFormat is defined, otherwise it is not evaluated. It’s used to filter for certain types. If the glossaryTypes list is empty, all entries will be used. For example:
glossaryTypes = ["Business", "Technical"]
diagramAttributes
If set, the string is used to create and store diagram attributes to be included in the document alongside a diagram. These placeholders are defined and populated with the diagram attributes, if used in the diagramAttributes string:
%DIAGRAM_AUTHOR%, %DIAGRAM_CREATED%, %DIAGRAM_GUID%, %DIAGRAM_MODIFIED%, %DIAGRAM_NAME%, %DIAGRAM_NOTES%, %DIAGRAM_DIAGRAM_TYPE%, %DIAGRAM_VERSION%, %NEWLINE%
Example: diagramAttributes = "Last modification: %DIAGRAM_MODIFIED%%NEWLINE%Version: %DIAGRAM_VERSION%"
You can add the string %NEWLINE% where a line break will be added. The resulting text is stored next to the diagram image using the same path and file name, but a different file extension (.ad). This can be included in the document if required. If diagramAttributes is not set or an empty string, no file is written.
additionalOptions
This parameter is used to define the specific behavior of the export. Currently these options are supported:
KeepFirstDiagram
If diagrams are not uniquely named, only the last diagram will be saved. If you want to prevent diagrams from being overwritten, add this option to additionalOptions.
Glossary export
By setting the glossaryAsciiDocFormat, the glossary terms stored in the EA project will be exported into a folder named 'glossary' below the configured exportPath. In case multiple EA projects are found for export, one glossary per project is exported - each named using the project’s GUID plus extension '.ad'.
Each individual file will be filtered (see glossaryTypes) and sorted in alphabetical order. In addition, a global glossary is created by using all single glossary files. This global file is named 'glossary.ad' and is also placed in the glossary folder. The global glossary is also filtered and sorted. If there is only one EA project, only the global glossary is written.
Further Reading and Resources
-
JIRA to Sparx EA blog post.
-
Did you Ever Wish you Had Better Diagrams? blog post.
Source
task exportEA(
dependsOn: [streamingExecute],
description: 'exports all diagrams and some texts from EA files',
group: 'docToolchain'
) {
doFirst {
}
doLast {
logger.info("docToolchain > exportEA: " + docDir)
logger.info("docToolchain > exportEA: " + mainConfigFile)
def configFile = new File(docDir, mainConfigFile)
def config = new ConfigSlurper().parse(configFile.text)
def scriptParameterString = ""
def exportPath = ""
def searchPath = ""
def glossaryPath = ""
def readme = """This folder contains exported diagrams or notes from Enterprise Architect.
Please note that these are generated files but reside in the `src`-folder in order to be versioned.
This is to make sure that they can be used from environments other than windows.
# Warning!
**The contents of this folder will be overwritten with each re-export!**
use `gradle exportEA` to re-export files
"""
if (!config.exportEA.connection.isEmpty()) {
logger.info("docToolchain > exportEA: found " + config.exportEA.connection)
scriptParameterString = scriptParameterString + "-c \"${config.exportEA.connection}\""
}
if (!config.exportEA.packageFilter.isEmpty()) {
def packageFilterToCreate = config.exportEA.packageFilter as List
logger.info("docToolchain > exportEA: package filter list size: " + packageFilterToCreate.size())
packageFilterToCreate.each { packageFilter ->
scriptParameterString = scriptParameterString + " -p \"${packageFilter}\""
}
}
if (!config.exportEA.exportPath.isEmpty()) {
exportPath = new File(docDir, config.exportEA.exportPath).getAbsolutePath()
} else {
exportPath = new File(docDir, 'src/docs').getAbsolutePath()
}
if (!config.exportEA.searchPath.isEmpty()) {
searchPath = new File(docDir, config.exportEA.searchPath).getAbsolutePath()
}
else if (!config.exportEA.absoluteSearchPath.isEmpty()) {
searchPath = new File(config.exportEA.absoluteSearchPath).getAbsolutePath()
}
else {
searchPath = new File(docDir, 'src').getAbsolutePath()
}
scriptParameterString = scriptParameterString + " -d \"$exportPath\""
scriptParameterString = scriptParameterString + " -s \"$searchPath\""
logger.info("docToolchain > exportEA: exportPath: " + exportPath)
//remove old glossary files/folder if exist
new File(exportPath, 'glossary').deleteDir()
        //set the glossary file path in case an output format is configured, otherwise no glossary is written
if (!config.exportEA.glossaryAsciiDocFormat.isEmpty()) {
//create folder to store glossaries
new File(exportPath, 'glossary/.').mkdirs()
glossaryPath = new File(exportPath, 'glossary').getAbsolutePath()
scriptParameterString = scriptParameterString + " -g \"$glossaryPath\""
}
//configure additional diagram attributes to be exported
if (!config.exportEA.diagramAttributes.isEmpty()) {
scriptParameterString = scriptParameterString + " -da \"$config.exportEA.diagramAttributes\""
}
        //configure additional options for the export
if (!config.exportEA.additionalOptions.isEmpty()) {
scriptParameterString = scriptParameterString + " -ao \"$config.exportEA.additionalOptions\""
}
//make sure path for notes exists
//and remove old notes
new File(exportPath, 'ea').deleteDir()
//also remove old diagrams
new File(exportPath, 'images/ea').deleteDir()
//create a readme to clarify things
new File(exportPath, 'images/ea/.').mkdirs()
new File(exportPath, 'images/ea/readme.ad').write(readme)
new File(exportPath, 'ea/.').mkdirs()
new File(exportPath, 'ea/readme.ad').write(readme)
//execute through cscript in order to make sure that we get WScript.echo right
logger.info("docToolchain > exportEA: parameters: " + scriptParameterString)
"%SystemRoot%\\System32\\cscript.exe //nologo ${projectDir}/scripts/exportEAP.vbs ${scriptParameterString}".executeCmd()
//the VB Script is only capable of writing iso-8859-1-Files.
//we now have to convert them to UTF-8
new File(exportPath, 'ea/.').eachFileRecurse { file ->
if (file.isFile()) {
println "exported notes " + file.canonicalPath
file.write(file.getText('iso-8859-1'), 'utf-8')
}
}
//sort, filter and reformat a glossary if an output format is configured
if (!config.exportEA.glossaryAsciiDocFormat.isEmpty()) {
def glossaryTypes
if (!config.exportEA.glossaryTypes.isEmpty()) {
glossaryTypes = config.exportEA.glossaryTypes as List
}
new GlossaryHandler().execute(glossaryPath, config.exportEA.glossaryAsciiDocFormat, glossaryTypes);
}
}
}
' based on the "Project Interface Example" which comes with EA
' http://stackoverflow.com/questions/1441479/automated-method-to-export-enterprise-architect-diagrams
Dim EAapp 'As EA.App
Dim Repository 'As EA.Repository
Dim FS 'As Scripting.FileSystemObject
Dim projectInterface 'As EA.Project
Const ForAppending = 8
Const ForWriting = 2
' Helper
' http://windowsitpro.com/windows/jsi-tip-10441-how-can-vbscript-create-multiple-folders-path-mkdir-command
Function MakeDir (strPath)
Dim strParentPath, objFSO
Set objFSO = CreateObject("Scripting.FileSystemObject")
On Error Resume Next
strParentPath = objFSO.GetParentFolderName(strPath)
If Not objFSO.FolderExists(strParentPath) Then MakeDir strParentPath
If Not objFSO.FolderExists(strPath) Then objFSO.CreateFolder strPath
On Error Goto 0
MakeDir = objFSO.FolderExists(strPath)
End Function
' Replaces certain characters with '_' to avoid unwanted file or folder names causing errors or structure failures.
' Regular expression can easily be extended with further characters to be replaced.
Function NormalizeName(theName)
dim re : Set re = new regexp
re.Pattern = "[\\/\[\]\s]"
re.Global = True
NormalizeName = re.Replace(theName, "_")
End Function
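' Writes the notes of an element to an AsciiDoc file if the notes start
' with the marker {adoc:<filename>}. The text after the marker is appended
' to <filename>.ad below the export destination.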
Sub WriteNote(currentModel, currentElement, notes, prefix)
If (Left(notes, 6) = "{adoc:") Then
strFileName = Trim(Mid(notes,7,InStr(notes,"}")-7))
strNotes = Right(notes,Len(notes)-InStr(notes,"}"))
set objFSO = CreateObject("Scripting.FileSystemObject")
If (currentModel.Name="Model") Then
' When we work with the default model, we don't need a sub directory
path = objFSO.BuildPath(exportDestination,"ea/")
Else
path = objFSO.BuildPath(exportDestination,"ea/"&NormalizeName(currentModel.Name)&"/")
End If
MakeDir(path)
post = ""
If (prefix<>"") Then
post = "_"
End If
MakeDir(path&prefix&post)
set objFile = objFSO.OpenTextFile(path&prefix&post&"/"&strFileName&".ad",ForAppending, True)
name = currentElement.Name
name = Replace(name,vbCr,"")
name = Replace(name,vbLf,"")
strCombinedNotes = "_all_notes.ad"
set objCombinedNotesFile = objFSO.OpenTextFile(path&prefix&post&"/"&strCombinedNotes,ForAppending, True)
if (Left(strNotes, 3) = vbCRLF&"|") Then
' content should be rendered as table - so don't interfere with it
objFile.WriteLine(vbCRLF)
objCombinedNotesFile.WriteLine(vbCRLF)
else
'let's add the name of the object
objFile.WriteLine(vbCRLF&vbCRLF&"."&name)
objCombinedNotesFile.WriteLine(vbCRLF&vbCRLF&"."&name)
End If
objFile.WriteLine(vbCRLF&strNotes)
objFile.Close
objCombinedNotesFile.WriteLine(vbCRLF&strNotes)
objCombinedNotesFile.Close
if (prefix<>"") Then
' write the same to a second file
set objFile = objFSO.OpenTextFile(path&prefix&".ad",ForAppending, True)
objFile.WriteLine(vbCRLF&vbCRLF&"."&name&vbCRLF&strNotes)
objFile.Close
End If
End If
End Sub
Sub SyncJira(currentModel, currentDiagram)
notes = currentDiagram.notes
set currentPackage = Repository.GetPackageByID(currentDiagram.PackageID)
updated = 0
created = 0
If (Left(notes, 6) = "{jira:") Then
WScript.echo " >>>> Diagram jira tag found"
strSearch = Mid(notes,7,InStr(notes,"}")-7)
Set objShell = CreateObject("WScript.Shell")
'objShell.CurrentDirectory = fso.GetFolder("./scripts")
Set objExecObject = objShell.Exec ("cmd /K groovy ./scripts/exportEAPJiraPrintHelper.groovy """ & strSearch &""" & exit")
strReturn = ""
x = 0
y = 0
Do While Not objExecObject.StdOut.AtEndOfStream
output = objExecObject.StdOut.ReadLine()
' WScript.echo output
jiraElement = Split(output,"|")
name = jiraElement(0)&":"&vbCR&vbLF&jiraElement(4)
On Error Resume Next
Set requirement = currentPackage.Elements.GetByName(name)
On Error Goto 0
if (IsObject(requirement)) then
' element already exists
requirement.notes = ""
requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
requirement.Update()
updated = updated + 1
else
Set requirement = currentPackage.Elements.AddNew(name,"Requirement")
requirement.notes = ""
requirement.notes = requirement.notes&"<a href='"&jiraElement(5)&"'>"&jiraElement(0)&"</a>"&vbCR&vbLF
requirement.notes = requirement.notes&"Priority: "&jiraElement(1)&vbCR&vbLF
requirement.notes = requirement.notes&"Created: "&jiraElement(2)&vbCR&vbLF
requirement.notes = requirement.notes&"Assignee: "&jiraElement(3)&vbCR&vbLF
requirement.Update()
currentPackage.Elements.Refresh()
Set dia_obj = currentDiagram.DiagramObjects.AddNew("l="&(10+x*200)&";t="&(10+y*50)&";b="&(10+y*50+44)&";r="&(10+x*200+180),"")
x = x + 1
if (x>3) then
x = 0
y = y + 1
end if
dia_obj.ElementID = requirement.ElementID
dia_obj.Update()
created = created + 1
end if
Loop
Set objShell = Nothing
WScript.echo "created "&created&" requirements"
WScript.echo "updated "&updated&" requirements"
End If
End Sub
' This sub routine checks if the format string defined in diagramAttributes
' does contain any characters. It replaces the known placeholders:
' %DIAGRAM_AUTHOR%, %DIAGRAM_CREATED%, %DIAGRAM_GUID%, %DIAGRAM_MODIFIED%,
' %DIAGRAM_NAME%, %DIAGRAM_NOTES%, %DIAGRAM_DIAGRAM_TYPE%, %DIAGRAM_VERSION%
' with the attribute values read from the EA diagram object.
' None, one or multiple number of placeholders can be used to create a diagram attribute
' to be added to the document. The attribute string is stored as a file with the same
' path and name as the diagram image, but with suffix .ad. So, it can
' easily be included in an asciidoc file.
Sub SaveDiagramAttribute(currentDiagram, path, diagramName)
If Len(diagramAttributes) > 0 Then
filledDiagAttr = diagramAttributes
set objFSO = CreateObject("Scripting.FileSystemObject")
filename = objFSO.BuildPath(path, diagramName & ".ad")
set objFile = objFSO.OpenTextFile(filename, ForWriting, True)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_AUTHOR%", currentDiagram.Author)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_CREATED%", currentDiagram.CreatedDate)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_GUID%", currentDiagram.DiagramGUID)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_MODIFIED%", currentDiagram.ModifiedDate)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_NAME%", currentDiagram.Name)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_NOTES%", currentDiagram.Notes)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_DIAGRAM_TYPE%", currentDiagram.Type)
filledDiagAttr = Replace(filledDiagAttr, "%DIAGRAM_VERSION%", currentDiagram.Version)
filledDiagAttr = Replace(filledDiagAttr, "%NEWLINE%", vbCrLf)
objFile.WriteLine(filledDiagAttr)
objFile.Close
End If
End Sub
Sub SaveDiagram(currentModel, currentDiagram)
Dim exportDiagram ' As Boolean
' Open the diagram
Repository.OpenDiagram(currentDiagram.DiagramID)
' Save and close the diagram
set objFSO = CreateObject("Scripting.FileSystemObject")
If (currentModel.Name="Model") Then
' When we work with the default model, we don't need a sub directory
path = objFSO.BuildPath(exportDestination,"/images/ea/")
Else
path = objFSO.BuildPath(exportDestination,"/images/ea/" & NormalizeName(currentModel.Name) & "/")
End If
path = objFSO.GetAbsolutePathName(path)
MakeDir(path)
diagramName = currentDiagram.Name
diagramName = Replace(diagramName,vbCr,"")
diagramName = Replace(diagramName,vbLf,"")
diagramName = NormalizeName(diagramName)
filename = objFSO.BuildPath(path, diagramName & ".png")
exportDiagram = True
If objFSO.FileExists(filename) Then
WScript.echo " --- " & filename & " already exists."
If Len(additionalOptions) > 0 Then
If InStr(additionalOptions, "KeepFirstDiagram") > 0 Then
WScript.echo " --- Skipping export -- parameter 'KeepFirstDiagram' set."
Else
WScript.echo " --- Overwriting -- parameter 'KeepFirstDiagram' not set."
exportDiagram = False
End If
Else
WScript.echo " --- Overwriting -- parameter 'KeepFirstDiagram' not set."
End If
End If
If exportDiagram Then
projectInterface.SaveDiagramImageToFile(filename)
WScript.echo " extracted image to " & filename
If Not IsEmpty(diagramAttributes) Then
SaveDiagramAttribute currentDiagram, path, diagramName
End If
End If
Repository.CloseDiagram(currentDiagram.DiagramID)
' Write the note of the diagram
WriteNote currentModel, currentDiagram, currentDiagram.Notes, diagramName&"_notes"
For Each diagramElement In currentDiagram.DiagramObjects
Set currentElement = Repository.GetElementByID(diagramElement.ElementID)
WriteNote currentModel, currentElement, currentElement.Notes, diagramName&"_notes"
Next
For Each diagramLink In currentDiagram.DiagramLinks
set currentConnector = Repository.GetConnectorByID(diagramLink.ConnectorID)
WriteNote currentModel, currentConnector, currentConnector.Notes, diagramName&"_links"
Next
End Sub
'
' Recursively saves all diagrams under the provided package and its children
'
Sub DumpDiagrams(thePackage,currentModel)
Set currentPackage = thePackage
' export element notes
For Each currentElement In currentPackage.Elements
WriteNote currentModel, currentElement, currentElement.Notes, ""
' export connector notes
For Each currentConnector In currentElement.Connectors
' WScript.echo currentConnector.ConnectorGUID
if (currentConnector.ClientID=currentElement.ElementID) Then
WriteNote currentModel, currentConnector, currentConnector.Notes, ""
End If
Next
if (Not currentElement.CompositeDiagram Is Nothing) Then
SyncJira currentModel, currentElement.CompositeDiagram
SaveDiagram currentModel, currentElement.CompositeDiagram
End If
if (Not currentElement.Elements Is Nothing) Then
DumpDiagrams currentElement,currentModel
End If
Next
' Iterate through all diagrams in the current package
For Each currentDiagram In currentPackage.Diagrams
SyncJira currentModel, currentDiagram
SaveDiagram currentModel, currentDiagram
Next
' Process child packages
Dim childPackage 'as EA.Package
' otPackage = 5
if (currentPackage.ObjectType = 5) Then
For Each childPackage In currentPackage.Packages
call DumpDiagrams(childPackage, currentModel)
Next
End If
End Sub
Function SearchEAProjects(path)
For Each folder In path.SubFolders
SearchEAProjects folder
Next
For Each file In path.Files
If fso.GetExtensionName (file.Path) = "eap" OR fso.GetExtensionName (file.Path) = "eapx" OR fso.GetExtensionName (file.Path) = "qea" OR fso.GetExtensionName (file.Path) = "qeax" Then
WScript.echo "found "&file.path
If (Left(file.name, 1) = "_") Then
WScript.echo "skipping, because it start with `_` (replication)"
Else
OpenProject(file.Path)
End If
End If
Next
End Function
'Gets the package object as referenced by its GUID from the Enterprise Architect project.
'Looks for the model node, the package is a child of as it is required for the diagram export.
'Calls the Sub routine DumpDiagrams for the model and package found.
'An error is printed to console only if the packageGUID is not found in the project.
Function DumpPackageDiagrams(EAapp, packageGUID)
WScript.echo "DumpPackageDiagrams"
WScript.echo packageGUID
Dim package
Set package = EAapp.Repository.GetPackageByGuid(packageGUID)
If (package Is Nothing) Then
WScript.echo "invalid package - as package is not part of the project"
Else
Dim currentModel
Set currentModel = package
while currentModel.IsModel = false
Set currentModel = EAapp.Repository.GetPackageByID(currentModel.parentID)
wend
' Iterate through all child packages and save out their diagrams
' save all diagrams of package itself
call DumpDiagrams(package, currentModel)
End If
End Function
Function FormatStringToJSONString(inputString)
outputString = Replace(inputString, "\", "\\")
outputString = Replace(outputString, """", "\""")
outputString = Replace(outputString, vbCrLf, "\n")
outputString = Replace(outputString, vbLf, "\n")
outputString = Replace(outputString, vbCr, "\n")
FormatStringToJSONString = outputString
End Function
'If a valid file path is set, the glossary terms are read from EA repository,
'formatted in a JSON compatible format and written into file.
'The file is read and reformatted by the exportEA gradle task afterwards.
Function ExportGlossaryTermsAsJSONFile(EArepo)
If (Len(glossaryFilePath) > 0) Then
set objFSO = CreateObject("Scripting.FileSystemObject")
GUID = Replace(EArepo.ProjectGUID,"{","")
GUID = Replace(GUID,"}","")
currentGlossaryFile = objFSO.BuildPath(glossaryFilePath,"/"&GUID&".ad")
set objFile = objFSO.OpenTextFile(currentGlossaryFile,ForAppending, True)
Set glossary = EArepo.Terms()
objFile.WriteLine("[")
dim counter
counter = 0
For Each term In glossary
if (counter > 0) Then
objFile.Write(",")
end if
objFile.Write("{ ""term"" : """&FormatStringToJSONString(term.term)&""", ""meaning"" : """&FormatStringToJSONString(term.Meaning)&""",")
objFile.WriteLine(" ""termID"" : """&FormatStringToJSONString(term.termID)&""", ""type"" : """&FormatStringToJSONString(term.type)&""" }")
counter = counter + 1
Next
objFile.WriteLine("]")
objFile.Close
End If
End Function
Sub OpenProject(file)
' open Enterprise Architect
Set EAapp = CreateObject("EA.App")
WScript.echo "opening Enterprise Architect. This might take a moment..."
' load project
EAapp.Repository.OpenFile(file)
' make Enterprise Architect to not appear on screen
EAapp.Visible = False
' get repository object
Set Repository = EAapp.Repository
' Show the script output window
' Repository.EnsureOutputVisible("Script")
call ExportGlossaryTermsAsJSONFile(Repository)
Set projectInterface = Repository.GetProjectInterface()
Dim childPackage 'As EA.Package
' Iterate through all model nodes
Dim currentModel 'As EA.Package
If (InStrRev(file,"{") > 0) Then
' the filename references a GUID
' like {04C44F80-8DA1-4a6f-ECB8-982349872349}
WScript.echo file
GUID = Mid(file, InStrRev(file,"{")+0,38)
WScript.echo GUID
' Iterate through all child packages and save out their diagrams
call DumpPackageDiagrams(EAapp, GUID)
Else
If packageFilter.Count = 0 Then
WScript.echo "done"
' Iterate through all model nodes
For Each currentModel In Repository.Models
' Iterate through all child packages and save out their diagrams
For Each childPackage In currentModel.Packages
call DumpDiagrams(childPackage,currentModel)
Next
Next
Else
' Iterate through all packages found in the package filter given by script parameter.
For Each packageGUID In packageFilter
call DumpPackageDiagrams(EAapp, packageGUID)
Next
End If
End If
EAapp.Repository.CloseFile()
' Since EA 15.2 the Enterprise Architect background process hangs without calling Exit explicitly
On Error Resume Next
EAapp.Repository.CloseFile()
EAapp.Repository.Exit()
EAapp.Repository = null
' end fix EA
End Sub
Private connectionString
Private packageFilter
Private exportDestination
Private searchPath
Private glossaryFilePath
Private diagramAttributes
Private additionalOptions
exportDestination = "./src/docs"
searchPath = "./src"
Set packageFilter = CreateObject("System.Collections.ArrayList")
Set objArguments = WScript.Arguments
Dim argCount
argCount = 0
While objArguments.Count > argCount+1
Select Case objArguments(argCount)
Case "-c"
connectionString = objArguments(argCount+1)
Case "-p"
packageFilter.Add objArguments(argCount+1)
Case "-d"
exportDestination = objArguments(argCount+1)
Case "-s"
searchPath = objArguments(argCount+1)
Case "-g"
glossaryFilePath = objArguments(argCount+1)
Case "-da"
diagramAttributes = objArguments(argCount+1)
Case "-ao"
additionalOptions = objArguments(argCount+1)
End Select
argCount = argCount + 2
WEnd
set fso = CreateObject("Scripting.fileSystemObject")
WScript.echo "Image extractor"
' Check both sources - 1st check Enterprise Architect database connection, 2nd look for local project files
If Not IsEmpty(connectionString) Then
WScript.echo "opening database connection now"
OpenProject(connectionString)
End If
WScript.echo "looking for .eap(x) and .qea(x) files in " & fso.GetAbsolutePathName(searchPath)
' Dim f As Scripting.Files
SearchEAProjects fso.GetFolder(searchPath)
WScript.echo "finished exporting images"
3.16. exportVisio
1 minute to read
At a Glance
About This Task
This task searches for Visio files in the /src/docs folder, then exports all diagrams and element notes to /src/docs/images/visio and /src/docs/visio.
Images are stored as /images/visio/[filename]-[pagename].png. Notes are stored as /visio/[filename]-[pagename].adoc.
You can specify a filename to export notes to by starting any comment with {adoc:[filename].adoc}. The notes will then be written to /visio/[filename].adoc.
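For example, a page comment such as the following (the filename is purely illustrative) would have its text written to /visio/deployment-notes.adoc:
{adoc:deployment-notes.adoc} These notes describe the deployment of the system ...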
Important Information About This Task
-
Currently, only Visio files stored directly in /src/docs are supported. All others will export to the wrong location.
-
Before running this task, close any open Visio instance.
Further Reading and Resources
Source
task exportVisio(
dependsOn: [streamingExecute],
description: 'exports all diagrams and notes from visio files',
group: 'docToolchain'
) {
doLast {
//make sure path for notes exists
//and remove old notes
new File(docDir, 'src/docs/visio').deleteDir()
//also remove old diagrams
new File(docDir, 'src/docs/images/visio').deleteDir()
//create a readme to clarify things
def readme = """This folder contains exported diagrams and notes from visio files.
Please note that these are generated files but reside in the `src`-folder in order to be versioned.
This is to make sure that they can be used from environments other than windows.
# Warning!
**The contents of this folder will be overwritten with each re-export!**
use `gradle exportVisio` to re-export files
"""
new File(docDir, 'src/docs/images/visio/.').mkdirs()
new File(docDir, 'src/docs/images/visio/readme.ad').write(readme)
new File(docDir, 'src/docs/visio/.').mkdirs()
new File(docDir, 'src/docs/visio/readme.ad').write(readme)
def sourcePath = new File(docDir, 'src/docs/.').canonicalPath
def scriptPath = new File(projectDir, 'scripts/VisioPageToPngConverter.ps1').canonicalPath
"powershell ${scriptPath} -SourcePath ${sourcePath}".executeCmd()
}
}
# Convert all pages in all visio files in the given directory to png files.
# A Visio windows might flash shortly.
# The converted png files are stored in the same directory
# The name of the png file is concatenated from the Visio file name and the page name.
# In addition, all the comments are stored in adoc files.
# If the Visio file is named "MyVisio.vsdx" and the page is called "FirstPage"
# the name of the png file will be "MyVisio-FirstPage.png" and the comment will
# be stored in "MyVisio-FirstPage.adoc".
# But for the name of the adoc files there is an alternative. It can be given in the first
# line of the comment. If it is given in the comment it has to be given in curly brackets
# with the prefix "adoc:", e.g. {adoc:MyCommentFile.adoc}
# Prerequisites: Visio and PowerShell have to be installed on the computer.
# Parameter: SourcePath where visio files can be found
# Example: powershell VisioPageToPngConverter.ps1 -SourcePath c:\convertertest\
Param
(
[Parameter(Mandatory=$true,ValueFromPipeline=$true,Position=0)]
[Alias('p')][String]$SourcePath
)
Write-Output "starting to export visio"
If (!(Test-Path -Path $SourcePath))
{
Write-Warning "The path ""$SourcePath"" does not exist or is not accessible, please input the correct path."
Exit
}
# Extend the source path to get only Visio files of the given directory and not in subdirectories
If ($SourcePath.EndsWith("\"))
{
$SourcePath = "$SourcePath"
}
Else
{
$SourcePath = "$SourcePath\"
}
$VisioFiles = Get-ChildItem -Path "$SourcePath*" -Recurse -Include *.vsdx,*.vssx,*.vstx,*.vxdm,*.vssm,*.vstm,*.vsd,*.vdw,*.vss,*.vst
If(!($VisioFiles))
{
Write-Warning "There are no Visio files in the path ""$SourcePath""."
Exit
}
$VisioApp = New-Object -ComObject Visio.Application
$VisioApp.Visible = $false
# Extract the png from all the files in the folder
Foreach($File in $VisioFiles)
{
$FilePath = $File.FullName
Write-Output "found ""$FilePath"" ."
$FileDirectory = $File.DirectoryName # Get the folder containing the Visio file. Will be used to store the png and adoc files
$FileBaseName = $File.BaseName -replace '[ :/\\*?|<>]','-' # Get the filename to be used as part of the name of the png and adoc files
Try
{
$Document = $VisioApp.Documents.Open($FilePath)
$Pages = $VisioApp.ActiveDocument.Pages
Foreach($Page in $Pages)
{
# Create valid filenames for the png and adoc files
$PngFileName = $Page.Name -replace '[ :/\\*?|<>]','-'
$PngFileName = "$FileBaseName-$PngFileName.png"
$AdocFileName = $PngFileName.Replace(".png", ".adoc")
#TODO: this needs better logic
Write-Output("$SourcePath\images\visio\$PngFileName")
$Page.Export("$SourcePath\images\visio\$PngFileName")
$AllPageComments = ""
ForEach($PageComment in $Page.Comments)
{
# Extract adoc filename from comment text if the syntax is valid
# Remove the filename from the text and save the comment in a file with a valid name
$EofStringIndex = $PageComment.Text.IndexOf(".adoc}")
if ($PageComment.Text.StartsWith("{adoc") -And ($EofStringIndex -gt 6))
{
$AdocFileName = $PageComment.Text.Substring(6, $EofStringIndex -1)
$AllPageComments += $PageComment.Text.Substring($EofStringIndex + 6)
}
else
{
$AllPageComments += $PageComment.Text+"`n"
}
}
If ($AllPageComments)
{
$AdocFileName = $AdocFileName -replace '[:/\\*?|<>]','-'
#TODO: this needs better logic
$stream = [System.IO.StreamWriter] "$SourcePath\visio\$AdocFileName"
$stream.WriteLine($AllPageComments)
$stream.close()
}
}
$Document.Close()
}
Catch
{
if ($Document)
{
$Document.Close()
}
Write-Warning "One or more visio page(s) in file ""$FilePath"" have been lost in this converting."
Write-Warning "Error was: $_"
}
}
$VisioApp.Quit()
3.17. exportDrawIo
2 minutes to read
About This Task
There is no exportDrawIo task available in docToolchain because such a task is not required.
You can continue to use diagrams.net (formerly known as draw.io) to edit your diagrams simply by making a change to your diagram-authoring workflow.
About diagrams.net
diagrams.net offers free and open source desktop editors for all major operating system platforms.
Visit https://www.diagrams.net/integrations to find a desktop editor application compatible with your operating system.
When you use the desktop version, just create your diagram with the .png (or even better, .dio.png) extension and diagrams.net will always save your diagram as a PNG with the source as metadata.
They have also launched a free plugin for VS Code and IntelliJ so you can edit your diagrams offline!
How to Change Your Workflow to Use diagrams.net
Export your diagrams.net/draw.io diagrams as a PNG with the source embedded in the file metadata.
This allows you to embed your diagrams into AsciiDoc source as you normally would (using the image:: macro), with the added advantage of storing the diagram source with the image itself.
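For example, an exported diagram can then be embedded like any other image (the filename is illustrative):
image::images/system-overview.dio.png[]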
How to Convert a Confluence Page to AsciiDoc
If you are converting a Confluence page with embedded draw.io diagrams to AsciiDoc, use this export workflow to continue using diagrams.net:
-
Export an editable PNG diagram from Confluence.
-
Load the diagram you want to export from Confluence.
-
Click the option to export the diagram as an image.
-
In the Image modal, make sure that Include a copy of my diagram is selected.
-
Click Export to save the PNG file with the pattern [file].dio.png.
-
Commit the exported PNG file to source control.
Your diagram can now be managed in source control, added to your documentation source and edited using a diagrams.net desktop version.
Specifying .dio (short for "drawio") in the name will help you identify PNG files containing an embedded XML diagram source.
|
3.18. exportChangeLog
2 minutes to read
At a Glance
About This Task
As the name suggests, this task exports the changelog to be referenced from within your documentation, if needed.
The changelog is written to build/docs/changelog.adoc.
This task can be configured to use a different source control system or a different directory.
To configure this task, copy template_config/scripts/ChangelogConfig.groovy to your project and modify it to suit your needs.
Then pass the path to your configuration file to the task with -PchangelogConfigFile=<your config file>.
See the description inside the template for more details.
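For example, assuming the template was copied to scripts/ChangelogConfig.groovy (the path is illustrative), the task could be invoked like this:
./dtcw exportChangeLog -PchangelogConfigFile=scripts/ChangelogConfig.groovy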
By default, the source is the Git changelog for the path src/docs, so it only contains the commit messages for changes made to the documentation. Changes to the build or other sources in the repository will not show up.
By default, the changelog contains the date, author, and commit message of each change, already formatted as AsciiDoc table content:
| 09.04.2017 | Ralf D. Mueller | fix #24 template updated to V7.0
| 08.04.2017 | Ralf D. Mueller | fixed typo
You simply include it like this:
.Changes
[options="header",cols="1,2,6"]
|===
| Date | Author | Comment
include::../../build/docs/changelog.adoc[]
|===
By excluding the table definition, you can easily translate the table headings through different text snippets.
In a future docToolchain release, you will have the ability to include only certain commit messages from the changelog and exclude others (starting with # or // ?).
This feature is not available just yet.
|
Further Reading and Resources
The only constant in life is change blog post.
Source
task exportChangeLog(
description: 'exports the change log from a git subpath',
group: 'docToolchain'
) {
doFirst {
new File(targetDir).mkdirs()
}
doLast {
logger.info("docToolchain> docDir: "+docDir)
logger.info("docToolchain> mainConfigFile: "+mainConfigFile)
def config = new ConfigSlurper().parse(new File(docDir, mainConfigFile).text)
def cmd = "${config.changelog.cmd} ."
def changes = cmd.execute(null, new File(docDir, config.changelog.dir)).text
def changelog = new File(targetDir, 'changelog.adoc')
logger.info "> changelog exported ${changelog.canonicalPath}"
changelog.write(changes)
}
}
3.19. exportContributors
3 minutes to read
About This Task
This task crawls through all Asciidoctor source files and extracts a list of contributors, which is then rendered as AsciiDoc images of each contributor’s gravatar picture.
The extracted list is stored below {targetDir}/contributors/ (mirroring the path of each source file) so it can be easily included in your documents.
How to Use This Task
The best way to use this task is to create a feedback.adoc
file similar to this:
ifndef::backend-pdf[] (1)
image::https://img.shields.io/badge/improve-this%20doc-orange.svg[link={manualdir}{filename}, float=right] (2)
image::https://img.shields.io/badge/create-an%20issue-blue.svg[link="https://github.com/docToolchain/documentation/issues/new?title=&body=%0A%0A%5BEnter%20feedback%20here%5D%0A%0A%0A---%0A%23page:{filename}", float=right] (3)
endif::[]
include::{targetDir}/contributors/{filename}[] (4)
1 | Do not show this section when docs are rendered as PDF. |
2 | Create an Improve This Doc button which links to your GitHub sources. |
3 | Create a Create an Issue button which links to your issue tracker. |
4 | Include the list of contributors created by this task. |
(The task automatically adds the estimated reading time to the list of contributors.)
About the Avatar-Icons
It does not seem to be possible to extract a link to the GitHub avatar icons from the Git log. The solution is therefore to use Gravatar icons. For this to work, the contributor's email address is hashed and an icon link is generated from that hash.
http://www.gravatar.com/avatar/cc5f3bf8b3cb91c985ed4fd046aa451d?d=identicon
This results, at the very least, in an icon with a distinct color.
Contributors can set up their own image through Gravatar.com. For this to work, the Git commits need to use an email address which can be verified by Gravatar.com. Unfortunately, this is not the case if a contributor decided to make their email address private in the email settings of their GitHub account.
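As a minimal sketch of how such an icon link can be derived, assuming the usual Gravatar scheme of hashing the trimmed, lower-cased address with MD5 (the email address is a placeholder):
import java.security.MessageDigest

def email = 'contributor@example.com' // placeholder address
def hash = MessageDigest.getInstance('MD5')
        .digest(email.trim().toLowerCase().getBytes('UTF-8'))
        .encodeHex().toString()
def iconUrl = "https://www.gravatar.com/avatar/${hash}?d=identicon"
println iconUrl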
File Attributes
This task also exports some GitHub file attributes.
The extracted attributes are stored below {targetDir}/fileattribs/ (again mirroring the path of each source file).
:lastUpdated: 16.05.2019 06:22
:lastAuthorName: Ralf D. Müller
:lastAuthorEmail: ralf.d.mueller@gmail.com
:lastAuthorAvatar: http://www.gravatar.com/avatar/cc5f3bf8b3cb91c985ed4fd046aa451d?d=identicon[32,32,role='gravatar',alt='Ralf D. Müller',title='Ralf D. Müller']
:lastMessage: #310 started to document config options
You can import and use these attributes in the same way as you import the contributors list.
Please make sure that you do not accidentally publish the email address if your contributors do not want it. |
For example:
include::{targetDir}/fileattribs/{filename}[]
Last updated {lastUpdated} by {lastAuthorName}
3.20. exportJiraIssues
3 minutes to read
At a Glance
About This Task
This task exports all issues for a given query or queries from Jira as either an AsciiDoc table, an Excel file or both.
The configuration for this task can be found within Config.gradle (gradle.properties can be used as a fallback).
Username/password is deprecated, so you need to use username/API-token instead.
An API-token can be created through https://id.atlassian.com/manage/api-tokens. We recommend that you keep username and API-token out of your GitHub repository, and instead pass them as environment variables to docToolchain.
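For example, with the credentials kept in environment variables (the variable names are illustrative):
./dtcw exportJiraIssues -PjiraUser=$JIRA_USER -PjiraPass=$JIRA_API_TOKEN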
Migrate configuration to version >= 3.2.0
Since version 3.2.0, the configuration key requests is deprecated. Please migrate to exports instead. The old configuration will be removed in the near future.
To migrate your configuration, replace the JiraRequest class with a Map. The following example shows how to migrate a configuration with a single JiraRequest to the new configuration:
jira.requests = [
new JiraRequest(
filename: 'jiraIssues',
jql: 'project = %jiraProject% AND labels = %jiraLabel%',
customfields: [
'customfield_10026': 'StoryPoints'
]
)
]
will be migrated to:
jira.exports = [
[
filename: 'jiraIssues',
jql: 'project = %jiraProject% AND labels = %jiraLabel%',
customfields: [
'customfield_10026': 'StoryPoints'
]
]
]
Configuration
The Jira configuration supports a list of requests to Jira; the results of each request are saved in a file with the specified filename. The flags saveAsciidoc and saveExcel let you easily configure the formats in which results should be saved.
Deprecation Notice
-
The old configuration based on a single Jira query (the single 'jql' parameter) is deprecated. Support for it will be removed in the near future. Please migrate to the new configuration, which allows multiple Jira queries.
-
Since version 3.2.0, the configuration key requests is deprecated. Please migrate to exports instead. The old configuration will be removed in the near future.
Configuration Options
exports (since 3.2.0), a list of Maps containing the following keys:
-
filename: The filename of the exported file. The file extension will be added automatically.
-
jql: The Jira query to be executed. It can contain placeholders that are interpolated. Allowed placeholders are: %jiraProject% (interpolated with jira.project) and %jiraLabel% (interpolated with jira.label).
-
customfields: A Map of custom fields to be included in the export. The key is the technical name of the custom field in Jira, the value is the name of the column in the export.
rateLimit (since 3.2.0), The rate limit for Jira requests. Default is 10 requests per second.
requests (deprecated since 3.2.0, please use exports instead), a list of JiraRequest objects with the following properties:
class JiraRequest {
String filename //filename (without extension) of the file in which JQL results will be saved. Extension will be determined automatically for Asciidoc or Excel file
String jql // Jira Query Language syntax
Map<String,String> customfields // map of customFieldId:displayName values for Jira fields which don't have default names, i.e. customfield_10026:StoryPoints
}
Full configuration options:
// Configuration for Jira related tasks
jira = [:]
jira.with {
// endpoint of the JiraAPI (REST) to be used
api = 'https://your-jira-instance'
// requests per second for Jira API calls
rateLimit = 10
/*
WARNING: It is strongly recommended to store credentials securely instead of committing plain text values to your git repository!!!
Tool expects credentials that belong to an account which has the right permissions to read the JIRA issues for a given project.
Credentials can be used in a form of:
- passed parameters when calling script (-PjiraUser=myUsername -PjiraPass=myPassword) which can be fetched as a secrets on CI/CD or
- gradle variables set through gradle properties (uses the 'jiraUser' and 'jiraPass' keys)
Often, Jira & Confluence credentials are the same, in which case it is recommended to pass CLI parameters for both entities as
-Pusername=myUser -Ppassword=myPassword
*/
// the key of the Jira project
project = 'PROJECTKEY'
// the format of the received date time values to parse
dateTimeFormatParse = "yyyy-MM-dd'T'H:m:s.SSSz" // i.e. 2020-07-24'T'9:12:40.999 CEST
// the format in which the date time should be saved to output
dateTimeFormatOutput = "dd.MM.yyyy HH:mm:ss z" // i.e. 24.07.2020 09:02:40 CEST
// the label to restrict search to
label = 'label1'
// Legacy settings for Jira query. This setting is deprecated & support for it will soon be completely removed. Please use JiraRequests settings
jql = "project='%jiraProject%' AND labels='%jiraLabel%' ORDER BY priority DESC, duedate ASC"
// Base filename in which Jira query results should be stored
resultsFilename = 'JiraTicketsContent'
saveAsciidoc = true // if true, asciidoc file will be created with *.adoc extension
saveExcel = true // if true, Excel file will be created with *.xlsx extension
// Output folder for this task inside main outputPath
resultsFolder = 'JiraRequests'
/*
List of requests to Jira API:
These are basically JQL expressions bundled with a filename in which results will be saved.
Users can configure custom field IDs and name them for the column header,
e.g. customfield_10026:'Story Points' for a Jira instance that has a custom field with that name; the results will be saved in a column named "Story Points"
*/
exports = [
[
filename:"File1_Done_issues",
jql:"project='%jiraProject%' AND status='Done' ORDER BY duedate ASC",
customfields: [customfield_10026:'Story Points']
],
[
filename:'CurrentSprint',
jql:"project='%jiraProject%' AND Sprint in openSprints() ORDER BY priority DESC, duedate ASC",
customfields: [customfield_10026:'Story Points']
]
]
}
Source
task exportJiraIssues(
description: 'exports all jira issues from a given search',
group: 'docToolchain'
) {
doLast {
config.targetDir = targetDir
new ExportJiraIssuesTask(config).execute()
}
}
3.21. exportJiraSprintChangelogIssues
1 minute to read
About This Task
This task exports a simplified (key and summary) list of Jira issues for a specific sprint defined in the task configuration. A few additional fields (such as assignee) can be toggled using configuration flags.
Once you define the sprint, the relevant AsciiDoc and Excel files will be generated. If a sprint is not defined in the configuration, changelogs for all sprints that match the configuration will be saved in separate AsciiDoc files and in different tabs within an Excel file.
The task configuration can be found within Config.gradle
. In addition to the configuration snippet below, it is important to configure the Jira API and credentials in the Jira section of the configuration inside the same file.
Configuration
// The sprint changelog configuration generates changelog lists based on tickets in sprints of a Jira instance.
// This feature requires at least Jira API & credentials to be properly set in Jira section of this configuration
sprintChangelog = [:]
sprintChangelog.with {
sprintState = 'closed' // it is possible to define multiple states, i.e. 'closed, active, future'
ticketStatus = "Done, Closed" // it is possible to define multiple ticket statuses, i.e. "Done, Closed, 'in Progress'"
showAssignee = false
showTicketStatus = false
showTicketType = true
sprintBoardId = 12345 // a Jira instance may have multiple boards; define here which board should be used
// Output folder for this task inside main outputPath
resultsFolder = 'Sprints'
// if sprintName is not defined or a sprint with that name isn't found, release notes will be created for all sprints that match the sprint state configuration
sprintName = 'PRJ Sprint 1' // if sprint with a given sprintName is found, release notes will be created just for that sprint
allSprintsFilename = 'Sprints_Changelogs' // Extension will be automatically added.
}
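With this configuration in place, the task is invoked through the wrapper script:
./dtcw exportJiraSprintChangelog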
Source
task exportJiraSprintChangelog(
description: 'exports all jira issues from Sprint for release notes',
group: 'docToolchain'
) {
doLast {
config.targetDir = targetDir
new ExportJiraSprintChangelogTask(config).execute()
}
}
3.22. exportPPT
1 minute to read
At a Glance
About This Task
This task lets you export a series of PowerPoint slides to be used within your AsciiDoc documentation. It is currently a Windows-only task.
It exports the slides as .jpg files and the speaker notes as one .adoc file.
The tag {slide} within the speaker notes will be replaced with the corresponding image reference.
This will help you to get a stable result, even when you insert or delete slides.
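For example, speaker notes like these (the file and slide names are illustrative):
{slide}
This slide explains the deployment pipeline.
would be exported with the tag replaced by an image reference such as:
image::ppt/MyDeck.pptx/Slide1.jpg[]
This slide explains the deployment pipeline.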
Use the tagged regions feature of Asciidoctor (tag::…[]) to include only certain slides or parts of your speaker notes, as shown in the sketch below.
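For example, a tagged region placed in your speaker notes (the names are illustrative):
// tag::intro[]
{slide}
Introductory notes.
// end::intro[]
can then be included selectively:
include::ppt/MyDeck.pptx.ad[tag=intro]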
Further Reading and Resources
-
Read the Do More with Slides blog post.
-
Find more information about the Windows-only aspect of this task in this issue.
-
Check out asciidoctorj-office-extension for another way to use PPT slides in your docs.
Source
task exportPPT(
dependsOn: [streamingExecute],
description: 'exports all slides and some texts from PPT files',
group: 'docToolchain'
) {
doLast {
File sourceDir = file(srcDir)
logger.info("sourceDir: ${sourceDir}")
//make sure path for notes exists
//and remove old notes
new File(sourceDir, 'ppt').deleteDir()
//also remove old diagrams
new File(sourceDir, 'images/ppt').deleteDir()
//create a readme to clarify things
def readme = """This folder contains exported slides or notes from .ppt presentations.
Please note that these are generated files but reside in the `src`-folder in order to be versioned.
This is to make sure that they can be used from environments other than windows.
# Warning!
**The contents of this folder will be overwritten with each re-export!**
use `gradle exportPPT` to re-export files
"""
new File(sourceDir, 'images/ppt/.').mkdirs()
new File(sourceDir, 'images/ppt/readme.ad').write(readme)
new File(sourceDir, 'ppt/.').mkdirs()
new File(sourceDir, 'ppt/readme.ad').write(readme)
def searchPath = new File(sourceDir, 'ppt')
//execute through cscript in order to make sure that we get WScript.echo right
"%SystemRoot%\\System32\\cscript.exe //nologo ${projectDir}/scripts/exportPPT.vbs -s ${sourceDir.absolutePath}".executeCmd()
}
}
Const ForAppending = 8
Const ppPlaceholderBody = 2
' Helper
' http://windowsitpro.com/windows/jsi-tip-10441-how-can-vbscript-create-multiple-folders-path-mkdir-command
Function MakeDir (strPath)
Dim strParentPath, objFSO
Set objFSO = CreateObject("Scripting.FileSystemObject")
On Error Resume Next
strParentPath = objFSO.GetParentFolderName(strPath)
If Not objFSO.FolderExists(strParentPath) Then MakeDir strParentPath
If Not objFSO.FolderExists(strPath) Then objFSO.CreateFolder strPath
On Error Goto 0
MakeDir = objFSO.FolderExists(strPath)
End Function
Function SearchPresentations(path)
For Each folder In path.SubFolders
SearchPresentations folder
Next
For Each file In path.Files
If (Left(fso.GetExtensionName (file.Path), 3) = "ppt") OR (Left(fso.GetExtensionName (file.Path), 3) = "pps") Then
WScript.echo "found "&file.path
ExportSlides(file.Path)
End If
Next
End Function
Sub ExportSlides(sFile)
Set objRegEx = CreateObject("VBScript.RegExp")
objRegEx.Global = True
objRegEx.IgnoreCase = True
objRegEx.MultiLine = True
' "." doesn't work for multiline in vbs, "[\s,\S]" does...
objRegEx.Pattern = "[\s,\S]*{adoc}"
' http://www.pptfaq.com/FAQ00481_Export_the_notes_text_of_a_presentation.htm
strFileName = fso.GetFIle(sFile).Name
Err.Clear
Set oPPT = CreateObject("PowerPoint.Application")
Set oPres = oPPT.Presentations.Open(sFile, True, False, False) ' Read Only, No Title, No Window
On Error resume next
Set oSlides = oPres.Slides
WScript.echo "number slides: "&oSlides.Count
strNotesText = ""
strImagePath = "/images/ppt/" & strFileName & "/"
MakeDir(searchPath & strImagePath)
strNotesPath = "/ppt/"
MakeDir(searchPath & strNotesPath)
For Each oSl In oSlides
strSlideName = oSl.Name
'WScript.echo fso.GetAbsolutePathName(searchPath) & strImagePath & strSlideName & ".jpg"
oSl.Export fso.GetAbsolutePathName(searchPath) & strImagePath & strSlideName & ".jpg", ".jpg"
For Each oSh In oSl.NotesPage.Shapes
If oSh.PlaceholderFormat.Type = ppPlaceholderBody Then
If oSh.HasTextFrame Then
If oSh.TextFrame.HasText Then
strCurrentNotes = oSh.TextFrame.TextRange.Text
strCurrentNotes = Replace(strCurrentNotes,vbVerticalTab, vbCrLf)
strCurrentNotes = Replace(strCurrentNotes,"{slide}","image::ppt/"&strFileName&"/"&strSlideName&".jpg[]")
' remove speaker notes before marker "{adoc}"
strCurrentNotes = objRegEx.Replace(strCurrentNotes,"")
strNotesText = strNotesText & vbCrLf & strCurrentNotes & vbCrLf & vbCrLf
End If
End If
End If
Next
Next
' WScript.echo fso.GetAbsolutePathName(".") & strNotesPath&""&strFileName&".ad"
' http://stackoverflow.com/questions/2524703/save-text-file-utf-8-encoded-with-vba
Set fsT = CreateObject("ADODB.Stream")
fsT.Type = 2 'Specify stream type - we want To save text/string data.
fsT.Charset = "utf-8" 'Specify charset For the source text data.
fsT.Open 'Open the stream And write binary data To the object
fsT.WriteText "ifndef::imagesdir[:imagesdir: ../../images]"&vbCrLf&CStr(strNotesText)
fsT.SaveToFile fso.GetAbsolutePathName(searchPath) & strNotesPath&""&strFileName&".ad", 2 'Save binary data To disk
oPres.Close()
oPPT.Quit()
If Err.Number <> 0 Then
WScript.Echo "Error: " & Err.Number
WScript.Echo "Error (Hex): " & Hex(Err.Number)
WScript.Echo "Source: " & Err.Source
WScript.Echo "Description: " & Err.Description
Err.Clear ' Clear the Error
End If
End Sub
set fso = CreateObject("Scripting.fileSystemObject")
WScript.echo "Slide extractor"
Set objArguments = WScript.Arguments
Dim argCount
argCount = 0
While objArguments.Count > argCount+1
Select Case objArguments(argCount)
Case "-s"
searchPath = objArguments(argCount+1)
End Select
argCount = argCount + 2
WEnd
WScript.echo "looking for .ppt files in " & fso.GetAbsolutePathName(searchPath)
SearchPresentations fso.GetFolder(searchPath)
WScript.echo "finished exporting slides"
3.23. exportExcel
2 minutes to read
At a Glance
About This Task
Sometimes you need to include tabular data in your documentation.
Most likely, this data will be stored as a MS Excel spreadsheet, or you may like to use Excel to create and edit it.
Either way, this task lets you export an Excel spreadsheet and include it directly in your docs.
It searches for .xlsx files and exports each contained worksheet as .csv and as .adoc.
Note that formulas contained in your spreadsheet are evaluated and exported statically.
The generated files are written to src/excel/[filename]/[worksheet].(adoc|csv).
The src folder is used instead of the build folder because a better history of worksheet changes is captured.
The files can be included either as AsciiDoc:
include::excel/Sample.xlsx/Numerical.adoc[]
…or as a CSV file:
[options="header",format="csv"] |=== include::excel/Sample.xlsx/Numerical.csv[] |===
The AsciiDoc version gives you a bit more control because the following are preserved:
-
Horizontal and vertical alignment.
-
col-span and row-span.
-
Line breaks.
-
Column width relative to other columns.
-
Background colors.
Further Reading and Resources
See asciidoctorj-office-extension to learn another way to use Excel spreadsheets in your docs.
Source
task exportExcel(
description: 'exports all excelsheets to csv and AsciiDoc',
group: 'docToolchain'
) {
doFirst {
File sourceDir = file(srcDir)
def tree = fileTree(srcDir).include('**/*.xlsx').exclude('**/~*')
def exportFileDir = new File(sourceDir, 'excel')
//remove the old export folder if it exists
exportFileDir.deleteDir()
//create a readme to clarify things
def readme = """This folder contains exported workbooks from Excel.
Please note that these are generated files but reside in the `src`-folder in order to be versioned.
This is to make sure that they can be used from environments other than windows.
# Warning!
**The contents of this folder will be overwritten with each re-export!**
use `gradle exportExcel` to re-export files
"""
exportFileDir.mkdirs()
new File(exportFileDir, '/readme.ad').write(readme)
}
doLast {
File sourceDir = file(srcDir)
def exportFileDir = new File(sourceDir, 'excel')
def tree = fileTree(srcDir).include('**/*.xlsx').exclude('**/~*')
def nl = System.getProperty("line.separator")
def export = { sheet, evaluator, targetFileName ->
def targetFileCSV = new File(targetFileName + '.csv')
def targetFileAD = new File(targetFileName + '.adoc')
def df = new org.apache.poi.ss.usermodel.DataFormatter();
def regions = []
sheet.numMergedRegions.times {
regions << sheet.getMergedRegion(it)
}
logger.debug "sheet contains ${regions.size()} regions"
def color = ''
def resetColor = false
def numCols = 0
def headerCreated = false
def emptyRows = 0
for (int rowNum=0; rowNum<=sheet.lastRowNum; rowNum++) {
def row = sheet.getRow(rowNum)
if (row && !headerCreated) {
headerCreated = true
// create AsciiDoc table header
def width = []
numCols = row.lastCellNum
numCols.times { columnIndex ->
width << sheet.getColumnWidth((int) columnIndex)
}
//lets make those numbers nicer:
width = width.collect { Math.round(100 * it / width.sum()) }
targetFileAD.append('[options="header",cols="' + width.join(',') + '"]' + nl)
targetFileAD.append('|===' + nl)
}
def data = []
def style = []
def colors = []
// For each row, iterate through each columns
if (row && (row?.lastCellNum!=-1)) {
numCols.times { columnIndex ->
def cell = row.getCell(columnIndex)
if (cell) {
def cellValue = df.formatCellValue(cell, evaluator)
if (cellValue.startsWith('*') && cellValue.endsWith('\u20AC')) {
// Remove special characters at currency
cellValue = cellValue.substring(1).trim();
}
def cellStyle = ''
def region = regions.find { it.isInRange(cell.rowIndex, cell.columnIndex) }
def skipCell = false
if (region) {
//check if we are in the upper left corner of the region
if (region.firstRow == cell.rowIndex && region.firstColumn == cell.columnIndex) {
def colspan = 1 + region.lastRow - region.firstRow
def rowspan = 1 + region.lastColumn - region.firstColumn
if (rowspan > 1) {
cellStyle += "${rowspan}"
}
if (colspan > 1) {
cellStyle += ".${colspan}"
}
cellStyle += "+"
} else {
skipCell = true
}
}
if (!skipCell) {
switch (cell.cellStyle.getCellAlignment().getHorizontal().toString()) {
case 'RIGHT':
cellStyle += '>'
break
case 'CENTER':
cellStyle += '^'
break
}
switch (cell.cellStyle.getCellAlignment().getVertical().toString()) {
case 'BOTTOM':
cellStyle += '.>'
break
case 'CENTER':
cellStyle += '.^'
break
}
color = cell.cellStyle.fillForegroundXSSFColor?.RGB?.encodeHex()
color = color != null ? nl + "{set:cellbgcolor:#${color}}" : ''
data << cellValue
if (color == '' && resetColor) {
colors << nl + "{set:cellbgcolor!}"
resetColor = false
} else {
colors << color
}
if (color != '') {
resetColor = true
}
style << cellStyle
} else {
data << ""
colors << ""
style << "skip"
}
} else {
data << ""
colors << ""
style << ""
}
}
emptyRows = 0
} else {
if (emptyRows<3) {
//insert empty row
numCols.times {
data << ""
colors << ""
style << ""
}
emptyRows++
} else {
break
}
}
targetFileCSV.append(data
.collect {
"\"${it.replaceAll('"', '""')}\""
}
.join(',') + nl, 'UTF-8')
// fix #1192 https://github.com/docToolchain/docToolchain/issues/1192
// remove unnecessary spans which break Asciidoctor rendering
def prev = ''
def removed = []
def useRemoved = true
style.eachWithIndex { s, i ->
if (s!="skip") {
if (s.contains('+')) {
def span = s.split('[+]')[0].split('[.]')
def current = ""
if (span.size()>1) {
current = span[1]
}
if (span[0] != '') {
removed << span[0] + '+' + s.split('[+]')[1]
} else {
removed << s.split('[+]')[1]
}
if (i > 0) {
if (current != prev) {
useRemoved = false
}
}
prev = current
} else {
removed << s
useRemoved = false
}
} else {
removed << "skip"
}
}
if (useRemoved) { style = removed }
// fix #1192 https://github.com/docToolchain/docToolchain/issues/1192
targetFileAD.append(data
.withIndex()
.collect { value, index ->
if (style[index] == "skip") {
""
} else {
style[index] + "| ${value.replaceAll('[|]', '{vbar}').replaceAll("\n", ' +$0') + colors[index]}"
}
}
.join(nl) + nl * 2, 'UTF-8')
}
targetFileAD.append('|===' + nl)
// rewrite file to remove consecutive nl
targetFileAD.write(targetFileAD.text.replaceAll("(?m)(\\r?\\n){2,}", nl+nl))
}
tree.each { File excel ->
println "file: " + excel
def excelDir = new File(exportFileDir, excel.getName())
excelDir.mkdirs()
InputStream inp
inp = new FileInputStream(excel)
def wb = org.apache.poi.ss.usermodel.WorkbookFactory.create(inp);
def evaluator = wb.getCreationHelper().createFormulaEvaluator();
for (int wbi = 0; wbi < wb.getNumberOfSheets(); wbi++) {
def sheetName = wb.getSheetAt(wbi).getSheetName()
println " -- sheet: " + sheetName
def targetFile = new File(excelDir, sheetName)
export(wb.getSheetAt(wbi), evaluator, targetFile.getAbsolutePath())
}
inp.close();
}
}
}
3.24. exportMarkdown
1 minute to read
About This Task
The exportMarkdown task can be used to include markdown files into the documentation.
It scans the /src/docs directory for markdown (*.md) files and converts them into AsciiDoc files. The converted files can then be included from within the /build folder.
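For example, assuming srcDir points to src/docs, a file src/docs/chapters/intro.md (the path is illustrative) is converted to {targetDir}/chapters/intro.adoc and can be included with:
include::{targetDir}/chapters/intro.adoc[]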
Source
task exportMarkdown(
description: 'exports all markdown files to AsciiDoc',
group: 'docToolchain',
type: Copy
) {
from srcDir
include("**/*.md") //include only markdown files
includeEmptyDirs = false
rename(/(.+).md/, '$1.adoc') //rename all files from *.md to *.adoc
filter(Markdown2AdocFilter) // convert the content of the files
into targetDir
}
class Markdown2AdocFilter extends FilterReader {
Markdown2AdocFilter(Reader input) {
super(new StringReader(nl.jworks.markdown_to_asciidoc.Converter.convertMarkdownToAsciiDoc(input.text)))
}
}
3.25. exportOpenAPI
1 minute to read
About This Task
This task exports an OpenAPI Specification definition yaml file to an AsciiDoc document. Currently this task depends on OpenAPI Generator (v4.3.1) and its gradle plugin.
Configuration
// Configuration for OpenAPI related task
openApi = [:]
// 'specFile' is the name of the OpenAPI specification yaml file. The tool expects this file inside the working dir (as a filename or a relative path with filename)
// 'infoUrl' and 'infoEmail' are specification metadata with further info related to the API. By default these values would be filled by openapi-generator plugin placeholders
//
openApi.with {
specFile = 'src/docs/petstore-v2.0.yaml' // i.e. 'petstore.yaml', 'src/doc/petstore.yaml'
infoUrl = 'https://my-api.company.com'
infoEmail = 'info@company.com'
}
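A typical run then looks like this; the generated document is written below ${targetDir}/OpenAPI and can be included from there (the exact output filename, typically index.adoc, is determined by the generator):
./dtcw exportOpenApi
include::{targetDir}/OpenAPI/index.adoc[]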
Source
task exportOpenApi (
type: org.openapitools.generator.gradle.plugin.tasks.GenerateTask,
group: 'docToolchain',
description: 'exports OpenAPI specification to the asciidoc file') {
if (!specFile) {
logger.info("\n---> OpenAPI specification file not found in Config.groovy (https://doctoolchain.github.io/docToolchain/#_exportopenapi)")
return
} else {
logger.info("Found OpenAPI specification in Config.groovy")
}
outputs.upToDateWhen { false }
outputs.cacheIf { false }
generatorName = 'asciidoc'
outputDir = "${targetDir}/OpenAPI".toString()
inputSpec = "${docDir}/${specFile}" // plugin is not able to find file if inputPath is defined as '.'
logger.debug("\n=====================\nProject Config:\n=====================")
logger.debug("Docdir: ${docDir}")
logger.debug("Target: ${targetDir}")
logger.info("\n=====================\nOpenAPI Config:\n=====================")
logger.info("Specification file: ${specFile}")
logger.info("inputSpec: ${inputSpec}")
logger.info("outputDir: ${outputDir}\n")
additionalProperties = [
infoEmail:"${config.openApi.infoEmail}",
infoUrl:"${config.openApi.infoUrl}"
]
}
3.26. exportStructurizr
3 minutes to read
About This Task
Structurizr builds upon "diagrams as code", allowing you to create multiple diagrams from a single model, using a number of tools and programming languages. Structurizr is specifically designed to support the C4 model for visualising software architecture.
This task exports PlantUML (respective C4-PlantUML) diagrams from a software architecture model described with the Structurizr DSL. The generated diagrams can be integrated into the AsciiDoc documentation.
The software architecture model is an integral part of the software architecture documentation.
As such, we strongly suggest putting the Structurizr workspace file under revision control by integrating it into the src/docs directory.
The software architecture model is then edited through this file.
This Structurizr DSL example below creates two diagrams, based upon a single set of elements and relationships.
workspace {
model {
user = person "User"
softwareSystem = softwareSystem "Software System" {
webapp = container "Web Application" {
user -> this "Uses"
}
container "Database" {
webapp -> this "Reads from and writes to"
}
}
}
views {
systemContext softwareSystem {
include *
autolayout lr
}
container softwareSystem {
include *
autolayout lr
}
theme default
}
}
And here are the diagrams defined by the views in the example above, rendered by the Structurizr web renderer.
Configuration
// Configuration for Structurizr related tasks
structurizr = [:]
structurizr.with {
// Configure where `exportStructurizr` looks for the Structurizr model.
workspace = {
// The directory in which the Structurizr workspace file is located.
// path = 'src/docs/structurizr'
// By default `exportStructurizr` looks for a file '${structurizr.workspace.path}/workspace.dsl'
// You can customize this behavior with 'filename'. Note that the workspace filename is provided without '.dsl' extension.
// filename = 'workspace'
}
export = {
// Directory for the exported diagrams.
//
// WARNING: Do not put manually created/changed files into this directory.
// If a valid Structurizr workspace file is found the directory is deleted before the diagram files are generated.
// outputPath = 'src/docs/structurizr/diagrams'
// Format of the exported diagrams. Defaults to 'plantuml' if the parameter is not provided.
//
// Following formats are supported:
// - 'plantuml': the same as 'plantuml/structurizr'
// - 'plantuml/structurizr': exports views to PlantUML
// - 'plantuml/c4plantuml': exports views to PlantUML with https://github.com/plantuml-stdlib/C4-PlantUML
// format = 'plantuml'
}
}
Example Configuration
The example below shows a possible directory layout with a src/docs/structurizr directory containing the workspace.dsl file.
.
├── docToolchainConfig.groovy
├── dtcw
└── src
    └── docs
        ├── example
        │   └── example.adoc
        ├── images
        │   ├── some-pics-1.png
        │   └── some-pics-2.png
        └── structurizr
            └── workspace.dsl
The minimal configuration for the exportStructurizr task in your docToolchainConfig.groovy would look like
structurizr = [:]
structurizr.with {
workspace = {
path = 'src/docs/structurizr'
}
export = {
outputPath = "src/docs/structurizr/diagrams"
// The format is optional.
// format = 'plantuml'
}
}
You probably want to put the directory configured with structurizr.export.outputPath into your .gitignore file.
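Following the example configuration above, the corresponding .gitignore entry would be:
src/docs/structurizr/diagrams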
Do not put manually created/changed files into the directory provided with structurizr.export.outputPath.
If a valid Structurizr workspace file is provided, the directory is deleted before the diagram files are generated.
|
Calling ./dtcw exportStructurizr generates the diagrams in the structurizr.export.outputPath directory.
.
├── docToolchainConfig.groovy
├── dtcw
└── src
    └── docs
        ├── example
        │   └── example.adoc
        ├── images
        │   ├── some-pics-1.png
        │   └── some-pics-2.png
        └── structurizr
            ├── diagrams
            │   ├── Container-001-key.puml
            │   ├── Container-001.puml
            │   ├── SystemContext-001-key.puml
            │   └── SystemContext-001.puml
            └── workspace.dsl
Following our example, the exported diagrams may be included in the AsciiDoc document example.adoc with
plantuml::../structurizr/diagrams/SystemContext-001.puml["structurizr-SystemContext",format=svg]
plantuml::../structurizr/diagrams/Container-001.puml["structurizr-Container",format=svg]
Source
task exportStructurizr (
group: 'docToolchain',
description: 'exports the views of a Structurizr DSL file to diagrams'
) {
doLast {
logger.debug("\n=====================\nStructurizr Config - before property replacement:\n=====================")
logger.debug("structurizr.workspace.path: ${config.structurizr.workspace.path}")
logger.debug("structurizr.workspace.filename: ${config.structurizr.workspace.filename}")
logger.debug("structurizr.export.outputPath: ${config.structurizr.export.outputPath}")
logger.debug("structurizr.export.format: ${config.structurizr.export.format}")
// First we check the parameters
def workspacePath = findProperty("structurizr.workspace.path")?:config.structurizr.workspace.path
if (!workspacePath) {
throw new GradleException("Missing configuration parameter 'structurizr.workspace.path': please provide the path where the Structurizr workspace file is located.")
}
// If 'workspace.filename' is not provided, default to 'workspace' (without extension).
def filename = (findProperty("structurizr.workspace.filename")?:config.structurizr.workspace.filename)?:'workspace'
def outputPath = findProperty("structurizr.export.outputPath")?:config.structurizr.export.outputPath
if (!outputPath) {
throw new GradleException("Missing configuration parameter 'structurizr.export.outputPath': please provide the directory where the diagrams should be exported.")
}
// If 'format' parameter is not provided, default to 'plantuml'.
def format = (findProperty("structurizr.export.format")?:config.structurizr.export.format)?:'plantuml'
// Assure valid 'format' configuration parameter.
DiagramExporter exporter
switch(format) {
case 'plantuml':
case 'plantuml/structurizr':
exporter = new StructurizrPlantUMLExporter()
break
case 'plantuml/c4plantuml':
exporter = new C4PlantUMLExporter()
break
default:
throw new GradleException("unknown structurizr.format '${format}': supported formats are 'plantuml' and 'plantuml/c4plantuml'.")
}
logger.info("\n=====================\nStructurizr Config:\n=====================")
logger.info("structurizr.workspace.path: ${workspacePath}")
logger.info("structurizr.workspace.filename: ${filename}")
logger.info("structurizr.export.outputPath: ${outputPath}")
logger.info("structurizr.export.format: ${format}")
def workspaceFile = new File(docDir, workspacePath+'/'+filename+'.dsl')
logger.info("Parsing Structurizr workspace file '${workspaceFile}'")
StructurizrDslParser parser = new StructurizrDslParser()
// TODO: provide better error output in case parsing fails
parser.parse(workspaceFile)
Workspace workspace = parser.getWorkspace()
ThemeUtils.loadThemes(workspace)
// Cleanup existing diagrams and then make sure the directory exists where the diagrams are exported
new File(docDir, outputPath).deleteDir()
// Create a readme to clarify things
def readme = """This folder contains exported diagrams from a model described with Structurizr DSL.
Please note that these are generated files but reside in the `src`-folder in order to be versioned.
# Warning!
**The contents of this folder will be overwritten with each re-export!**
use `gradlew exportStructurizr` to re-export the diagrams
"""
new File(docDir, outputPath).mkdirs()
new File(docDir, outputPath+'/README.adoc').write(readme)
Collection<Diagram> diagrams = exporter.export(workspace);
diagrams.each { diagram ->
def file = new File(docDir, outputPath+"/"+diagram.key+'.'+diagram.getFileExtension())
file.write(diagram.definition)
if (diagram.legend) {
def legend = new File(docDir, outputPath+"/"+diagram.key+"-key."+diagram.getFileExtension())
legend.write(diagram.legend.definition)
}
}
}
}
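For reference, a configuration that sets every parameter the task reads might look like this (a sketch; the keys come from the source above, the values are illustrative):
structurizr = [:]
structurizr.with {
    workspace = {
        // directory containing the workspace file, relative to the docs directory
        path = 'src/docs/structurizr'
        // workspace file name without the '.dsl' extension; defaults to 'workspace'
        filename = 'workspace'
    }
    export = {
        // deleted and re-created on every export
        outputPath = 'src/docs/structurizr/diagrams'
        // 'plantuml' (alias 'plantuml/structurizr') or 'plantuml/c4plantuml'
        format = 'plantuml/c4plantuml'
    }
}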
4.4. htmlSanityCheck
1 minute to read
At a Glance
About This Task
This task invokes the htmlSanityCheck Gradle plugin, a standalone (batch and command-line) HTML sanity checker that detects missing images, dead links, and duplicated bookmarks.
In docToolchain, the htmlSanityCheck task ensures that the generated HTML contains no dead links, missing images, or other problems.
It is the last default task and creates its report in build/report/htmlchecks/index.html.
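The checker can be tuned through docToolchainConfig.groovy. The keys below mirror the ones read in the task source at the end of this section, following the same map-style used for the structurizr configuration; the values are only illustrative:
htmlSanityCheck = [:]
htmlSanityCheck.with {
    // sub-directory of the build output to check; defaults to 'html5'
    sourceDir = "html5"
    // treat HTTP 429 (too many requests) as a warning instead of an error
    httpWarningCodes = [429]
    // let the build fail when the checker finds errors; defaults to false
    failOnErrors = true
}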
Further Reading and Resources
-
Read the Automated Quality-Checks blog post.
-
Visit https://github.com/aim42/htmlSanityCheck for more information about this task.
Source
htmlSanityCheck {
sourceDir = new File(config.htmlSanityCheck.sourceDir?targetDir+"/"+config.htmlSanityCheck.sourceDir:"$targetDir/html5")
// files to check - in Set-notation
//sourceDocuments = [ "one-file.html", "another-file.html", "index.html"]
// where to put results of sanityChecks...
checkingResultsDir = new File(config.htmlSanityCheck.checkingResultsDir?:checkingResultsPath)
// directory where the results are written in JUnit XML format
junitResultsDir = new File(config.htmlSanityCheck.junitResultsDir?:"$targetDir/test-results/htmlchecks")
// which status codes shall be interpreted as warning, error, or success; defaults to the standard sets
httpSuccessCodes = config.htmlSanityCheck.httpSuccessCodes?:[]
httpWarningCodes = config.htmlSanityCheck.httpWarningCodes?:[]
httpErrorCodes = config.htmlSanityCheck.httpErrorCodes?:[]
// fail build on errors?
failOnErrors = config.htmlSanityCheck.failOnErrors?:false
logger.info "docToolchain> HSC sourceDir: ${sourceDir}"
logger.info "docToolchain> HSC checkingResultsDir: ${checkingResultsDir}"
}
4.5. dependencyUpdates
1 minute to read
About This Task
This task uses the Gradle versions plugin created by Ben Manes to check for outdated build dependencies. Use this task to keep all dependencies up to date.
Even if the task discovers a newer version, that doesn't mean all versions and dependencies will play nicely together. To ensure that everything works, we recommend sticking with the versions selected by the docToolchain contributors.
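Like every other docToolchain task, it is run through the wrapper script installed in your project directory:
./dtcw dependencyUpdates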
Further Reading and Resources
Read the Handle Dependency Updates the Easy Way blog post.