Mirror of https://github.com/kovetskiy/mark.git, synced 2026-05-03 14:47:38 +08:00

Compare commits

No commits in common: "master" and "v16.2.0" have entirely different histories.
@@ -1,4 +1,4 @@
-FROM golang:1.26.2 AS builder
+FROM golang:1.26.1 AS builder
 ENV GOPATH="/go"
 WORKDIR /go/src/github.com/kovetskiy/mark
 COPY / .
README.md (36 changes)
@@ -56,12 +56,11 @@ Also, optional following headers are supported:
 * blogpost: [Blog post](https://confluence.atlassian.com/doc/blog-posts-834222533.html) in `Space`. Cannot have `Parent`(s)

 ```markdown
-<!-- Content-Appearance: (full-width|fixed|default) -->
+<!-- Content-Appearance: (full-width|fixed) -->
 ```

 * (default) full-width: content will fill the full page width
 * fixed: content will be rendered in a fixed narrow view
-* default: sets the Confluence property value to `"default"`, which is the narrow layout as set by the Confluence UI. Note: `fixed` maps to a different Confluence property value and can cause misaligned page title and body content — use `default` instead for the narrow layout.

 ```markdown
 <!-- Sidebar: <h2>Test</h2> -->
@@ -877,11 +876,10 @@ GLOBAL OPTIONS:
    --space string                  use specified space key. If the space key is not specified, it must be set in the page metadata. [$MARK_SPACE]
    --parents string                A list containing the parents of the document separated by parents-delimiter (default: '/'). These will be prepended to the ones defined in the document itself. [$MARK_PARENTS]
    --parents-delimiter string      The delimiter used for the parents list (default: "/") [$MARK_PARENTS_DELIMITER]
-   --content-appearance string     default content appearance for pages without a Content-Appearance header. Possible values: full-width, fixed, default. [$MARK_CONTENT_APPEARANCE]
+   --content-appearance string     default content appearance for pages without a Content-Appearance header. Possible values: full-width, fixed. [$MARK_CONTENT_APPEARANCE]
    --mermaid-scale float           defines the scaling factor for mermaid renderings. (default: 1) [$MARK_MERMAID_SCALE]
    --include-path string           Path for shared includes, used as a fallback if the include doesn't exist in the current directory. [$MARK_INCLUDE_PATH]
    --changes-only                  Avoids re-uploading pages that haven't changed since the last run. [$MARK_CHANGES_ONLY]
-   --preserve-comments             Fetch and preserve inline comments on existing Confluence pages. [$MARK_PRESERVE_COMMENTS]
    --d2-scale float                defines the scaling factor for d2 renderings. (default: 1) [$MARK_D2_SCALE]
    --features string [ --features string ]  Enables optional features. Current features: d2, mermaid, mention, mkdocsadmonitions (default: "mermaid", "mention") [$MARK_FEATURES]
    --insecure-skip-tls-verify      skip TLS certificate verification (useful for self-signed certificates) [$MARK_INSECURE_SKIP_TLS_VERIFY]
@@ -905,8 +903,6 @@ image-align = "center"

 **NOTE**: Labels aren't supported when using `minor-edit`!

-**NOTE**: See [Preserving Inline Comments](#preserving-inline-comments) for a detailed description of the `--preserve-comments` flag.
-
 **NOTE**: The system specific locations are described in here:
 <https://pkg.go.dev/os#UserConfigDir>.
 Currently, these are:
@@ -977,34 +973,6 @@ mark -f "**/docs/*.md"

 We recommend linting your markdown files with [markdownlint-cli2](https://github.com/DavidAnson/markdownlint-cli2) before publishing them to Confluence, to catch conversion errors early.

-### Preserving Inline Comments
-
-When collaborators leave inline comments on a Confluence page, updating the page via `mark` will normally erase those comments because the stored body is fully replaced. The `--preserve-comments` flag re-attaches inline comment markers to the new page body before uploading, so existing review threads survive updates.
-
-```bash
-mark --preserve-comments -f docs/page.md
-```
-
-Or via environment variable:
-
-```bash
-MARK_PRESERVE_COMMENTS=true mark -f docs/page.md
-```
-
-**How it works:**
-
-1. Before uploading, `mark` fetches the current page body and all inline comment markers from the Confluence API.
-2. For each existing `<ac:inline-comment-marker>` tag it records the content wrapped by that marker plus a short context window immediately before the opening tag and immediately after the closing tag in the old body (not around the raw selection text, so the context is stable even when the marker wraps additional inline markup such as `<strong>`).
-3. It searches the new body for the same selected text and picks the occurrence whose surrounding context best matches the original (using Levenshtein distance), so the marker lands in the right place even if nearby text has shifted.
-4. The updated body—with all markers re-embedded—is then uploaded as normal.
-
-**Limitations:**
-
-* If the commented text was deleted from the document, the inline comment cannot be relocated and will be lost. `mark` logs a warning in this case.
-* Overlapping selections (two comments anchored to the same stretch of text) are detected; the earlier overlapping match is dropped with a warning, and the later one (higher byte offset) is kept, rather than producing malformed markup.
-* `--preserve-comments` is automatically skipped for newly created pages (there are no comments to preserve yet).
-* When combined with `--changes-only`, the comment-preservation API calls are skipped entirely on runs where the page content has not changed, avoiding unnecessary round-trips.
-
 ## Issues, Bugs & Contributions

 I've started the project to solve my own problem and open sourced the solution so anyone who has a problem like me can solve it too.
@@ -13,8 +13,7 @@ import (
 	"net/url"
 	"path"
 	"path/filepath"
-	"cmp"
-	"slices"
+	"sort"
 	"strconv"
 	"strings"

@@ -49,13 +48,6 @@ func ResolveAttachments(
 	attachments []Attachment,
 ) ([]Attachment, error) {
 	for i := range attachments {
-		// Skip checksum computation if already set (e.g. by mermaid/d2 renderers
-		// which use the source content as the stable checksum rather than the
-		// rendered PNG bytes, which may be non-deterministic across environments).
-		if attachments[i].Checksum != "" {
-			continue
-		}
-
 		checksum, err := GetChecksum(bytes.NewReader(attachments[i].FileBytes))
 		if err != nil {
 			return nil, fmt.Errorf("unable to get checksum for attachment %q: %w", attachments[i].Name, err)
@@ -243,8 +235,8 @@ func CompileAttachmentLinks(markdown []byte, attachments []Attachment) []byte {
 	// attachments/a.jpg
 	// attachments/a.jpg.jpg
 	// so we replace longer and then shorter
-	slices.SortStableFunc(replaces, func(a, b string) int {
-		return cmp.Compare(len(b), len(a))
+	sort.SliceStable(replaces, func(i, j int) bool {
+		return len(replaces[i]) > len(replaces[j])
 	})

 	for _, replace := range replaces {
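The two sides of this hunk produce the same longest-first ordering; master merely moved from `sort.SliceStable` to the Go 1.21+ `slices`/`cmp` API. A minimal sketch showing the equivalence (helper names are illustrative):

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
	"sort"
)

// sortLongestFirstModern orders strings longest-first with the Go 1.21+
// slices API, as on the master side of the hunk.
func sortLongestFirstModern(replaces []string) []string {
	slices.SortStableFunc(replaces, func(a, b string) int {
		return cmp.Compare(len(b), len(a))
	})
	return replaces
}

// sortLongestFirstLegacy does the same with the pre-1.21 sort package,
// as on the v16.2.0 side.
func sortLongestFirstLegacy(replaces []string) []string {
	sort.SliceStable(replaces, func(i, j int) bool {
		return len(replaces[i]) > len(replaces[j])
	})
	return replaces
}

func main() {
	a := sortLongestFirstModern([]string{"attachments/a.jpg", "attachments/a.jpg.jpg"})
	b := sortLongestFirstLegacy([]string{"attachments/a.jpg", "attachments/a.jpg.jpg"})
	fmt.Println(a[0] == b[0] && a[0] == "attachments/a.jpg.jpg")
}
```

Replacing `attachments/a.jpg.jpg` before `attachments/a.jpg` is what prevents the shorter name from clobbering a prefix of the longer one; both stable sorts preserve the relative order of equal-length names.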
@@ -58,12 +58,6 @@ type PageInfo struct {
 		Title string `json:"title"`
 	} `json:"ancestors"`

-	Body struct {
-		Storage struct {
-			Value string `json:"value"`
-		} `json:"storage"`
-	} `json:"body"`
-
 	Links struct {
 		Full string `json:"webui"`
 		Base string `json:"-"` // Not from JSON; populated from response _links.base
@@ -91,29 +85,6 @@ type LabelInfo struct {
 	Labels []Label `json:"results"`
 	Size   int     `json:"number"`
 }
-
-type InlineCommentProperties struct {
-	OriginalSelection string `json:"originalSelection"`
-	MarkerRef         string `json:"markerRef"`
-}
-
-type InlineCommentExtensions struct {
-	Location         string                  `json:"location"`
-	InlineProperties InlineCommentProperties `json:"inlineProperties"`
-}
-
-type InlineCommentResult struct {
-	Extensions InlineCommentExtensions `json:"extensions"`
-}
-
-type InlineComments struct {
-	Links struct {
-		Context string `json:"context"`
-		Next    string `json:"next"`
-	} `json:"_links"`
-	Results []InlineCommentResult `json:"results"`
-}
-
 type form struct {
 	buffer io.Reader
 	writer *multipart.Writer
@@ -123,7 +94,7 @@ type tracer struct {
 	prefix string
 }

-func (tracer *tracer) Printf(format string, args ...any) {
+func (tracer *tracer) Printf(format string, args ...interface{}) {
 	log.Trace().Msgf(tracer.prefix+" "+format, args...)
 }

@@ -493,13 +464,9 @@ func (api *API) GetAttachments(pageID string) ([]AttachmentInfo, error) {
 }

 func (api *API) GetPageByID(pageID string) (*PageInfo, error) {
-	return api.GetPageByIDExpanded(pageID, "ancestors,version")
-}
-
-func (api *API) GetPageByIDExpanded(pageID string, expand string) (*PageInfo, error) {
 	request, err := api.rest.Res(
 		"content/"+pageID, &PageInfo{},
-	).Get(map[string]string{"expand": expand})
+	).Get(map[string]string{"expand": "ancestors,version"})
 	if err != nil {
 		return nil, err
 	}
@@ -511,44 +478,6 @@ func (api *API) GetPageByIDExpanded(pageID string, expand string) (*PageInfo, er
 	return request.Response.(*PageInfo), nil
 }

-func (api *API) GetInlineComments(pageID string) (*InlineComments, error) {
-	const pageSize = 100
-	all := &InlineComments{}
-	start := 0
-
-	for {
-		result := &InlineComments{}
-		request, err := api.rest.Res(
-			"content/"+pageID+"/child/comment", result,
-		).Get(map[string]string{
-			"expand": "extensions.inlineProperties",
-			"limit":  fmt.Sprintf("%d", pageSize),
-			"start":  fmt.Sprintf("%d", start),
-		})
-		if err != nil {
-			return nil, err
-		}
-
-		if request.Raw.StatusCode != http.StatusOK {
-			return nil, newErrorStatusNotOK(request)
-		}
-
-		if all.Links.Context == "" {
-			all.Links = result.Links
-		}
-
-		all.Results = append(all.Results, result.Results...)
-
-		if len(result.Results) < pageSize || result.Links.Next == "" {
-			break
-		}
-
-		start += len(result.Results)
-	}
-
-	return all, nil
-}
-
 func (api *API) CreatePage(
 	space string,
 	pageType string,
@@ -556,21 +485,21 @@ func (api *API) CreatePage(
 	title string,
 	body string,
 ) (*PageInfo, error) {
-	payload := map[string]any{
+	payload := map[string]interface{}{
 		"type":  pageType,
 		"title": title,
-		"space": map[string]any{
+		"space": map[string]interface{}{
 			"key": space,
 		},
-		"body": map[string]any{
-			"storage": map[string]any{
+		"body": map[string]interface{}{
+			"storage": map[string]interface{}{
 				"representation": "storage",
 				"value":          body,
 			},
 		},
-		"metadata": map[string]any{
-			"properties": map[string]any{
-				"editor": map[string]any{
+		"metadata": map[string]interface{}{
+			"properties": map[string]interface{}{
+				"editor": map[string]interface{}{
 					"value": "v2",
 				},
 			},
@@ -578,7 +507,7 @@ func (api *API) CreatePage(
 	}

 	if parent != nil {
-		payload["ancestors"] = []map[string]any{
+		payload["ancestors"] = []map[string]interface{}{
 			{"id": parent.ID},
 		}
 	}
@@ -599,20 +528,20 @@ func (api *API) CreatePage(

 func (api *API) UpdatePage(page *PageInfo, newContent string, minorEdit bool, versionMessage string, appearance string, emojiString string) error {
 	nextPageVersion := page.Version.Number + 1
-	oldAncestors := []map[string]any{}
+	oldAncestors := []map[string]interface{}{}

 	if page.Type != "blogpost" && len(page.Ancestors) > 0 {
 		// picking only the last one, which is required by confluence
-		oldAncestors = []map[string]any{
+		oldAncestors = []map[string]interface{}{
 			{"id": page.Ancestors[len(page.Ancestors)-1].ID},
 		}
 	}

-	properties := map[string]any{
+	properties := map[string]interface{}{
 		// Fix to set full-width as has changed on Confluence APIs again.
 		// https://jira.atlassian.com/browse/CONFCLOUD-65447
 		//
-		"content-appearance-published": map[string]any{
+		"content-appearance-published": map[string]interface{}{
 			"value": appearance,
 		},
 		// content-appearance-draft should not be set as this is impacted by
@@ -626,37 +555,37 @@ func (api *API) UpdatePage(page *PageInfo, newContent string, minorEdit bool, ve
 		}
 		unicodeHex := fmt.Sprintf("%x", r)

-		properties["emoji-title-draft"] = map[string]any{
+		properties["emoji-title-draft"] = map[string]interface{}{
 			"value": unicodeHex,
 		}
-		properties["emoji-title-published"] = map[string]any{
+		properties["emoji-title-published"] = map[string]interface{}{
 			"value": unicodeHex,
 		}
 	}

-	payload := map[string]any{
+	payload := map[string]interface{}{
 		"id":    page.ID,
 		"type":  page.Type,
 		"title": page.Title,
-		"version": map[string]any{
+		"version": map[string]interface{}{
 			"number":    nextPageVersion,
 			"minorEdit": minorEdit,
 			"message":   versionMessage,
 		},
 		"ancestors": oldAncestors,
-		"body": map[string]any{
-			"storage": map[string]any{
+		"body": map[string]interface{}{
+			"storage": map[string]interface{}{
 				"value":          newContent,
 				"representation": "storage",
 			},
 		},
-		"metadata": map[string]any{
+		"metadata": map[string]interface{}{
 			"properties": properties,
 		},
 	}

 	request, err := api.rest.Res(
-		"content/"+page.ID, &map[string]any{},
+		"content/"+page.ID, &map[string]interface{}{},
 	).Put(payload)
 	if err != nil {
 		return err
@@ -671,10 +600,10 @@ func (api *API) UpdatePage(page *PageInfo, newContent string, minorEdit bool, ve

 func (api *API) AddPageLabels(page *PageInfo, newLabels []string) (*LabelInfo, error) {

-	labels := []map[string]any{}
+	labels := []map[string]interface{}{}
 	for _, label := range newLabels {
 		if label != "" {
-			item := map[string]any{
+			item := map[string]interface{}{
 				"prefix": "global",
 				"name":   label,
 			}
@@ -836,17 +765,17 @@ func (api *API) RestrictPageUpdatesCloud(
 		user = currentUser
 	}

-	var result any
+	var result interface{}

 	request, err := api.rest.
 		Res("content").
 		Id(page.ID).
 		Res("restriction", &result).
-		Post([]map[string]any{
+		Post([]map[string]interface{}{
 			{
 				"operation": "update",
-				"restrictions": map[string]any{
-					"user": []map[string]any{
+				"restrictions": map[string]interface{}{
+					"user": []map[string]interface{}{
 						{
 							"type":      "known",
 							"accountId": user.AccountID,
@@ -872,15 +801,15 @@ func (api *API) RestrictPageUpdatesServer(
 ) error {
 	var (
 		err    error
-		result any
+		result interface{}
 	)

 	request, err := api.json.Res(
 		"setContentPermissions", &result,
-	).Post([]any{
+	).Post([]interface{}{
 		page.ID,
 		"Edit",
-		[]map[string]any{
+		[]map[string]interface{}{
 			{
 				"userName": allowedUser,
 			},
go.mod (6 changes)
@@ -6,16 +6,16 @@ require (
 	github.com/bmatcuk/doublestar/v4 v4.10.0
 	github.com/chromedp/cdproto v0.0.0-20260321001828-e3e3800016bc
 	github.com/chromedp/chromedp v0.15.1
-	github.com/dreampuf/mermaid.go v0.2.0
+	github.com/dreampuf/mermaid.go v0.1.0
 	github.com/kovetskiy/gopencils v0.0.0-20250404051442-0b776066936a
-	github.com/rs/zerolog v1.35.1
+	github.com/rs/zerolog v1.35.0
 	github.com/stefanfritsch/goldmark-admonitions v1.1.1
 	github.com/stretchr/testify v1.11.1
 	github.com/urfave/cli-altsrc/v3 v3.1.0
 	github.com/urfave/cli/v3 v3.8.0
 	github.com/yuin/goldmark v1.8.2
 	go.yaml.in/yaml/v3 v3.0.4
-	golang.org/x/text v0.36.0
+	golang.org/x/text v0.35.0
 	oss.terrastruct.com/d2 v0.7.1
 	oss.terrastruct.com/util-go v0.0.0-20250213174338-243d8661088a
 )
go.sum (12 changes)
@@ -31,8 +31,8 @@ github.com/dlclark/regexp2 v1.11.4 h1:rPYF9/LECdNymJufQKmri9gV604RvvABwgOA8un7yA
 github.com/dlclark/regexp2 v1.11.4/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
 github.com/dop251/goja v0.0.0-20240927123429-241b342198c2 h1:Ux9RXuPQmTB4C1MKagNLme0krvq8ulewfor+ORO/QL4=
 github.com/dop251/goja v0.0.0-20240927123429-241b342198c2/go.mod h1:MxLav0peU43GgvwVgNbLAj1s/bSGboKkhuULvq/7hx4=
-github.com/dreampuf/mermaid.go v0.2.0 h1:dghdUGw7zoeISIHRMOzHdQ/A7gpHv+dKtVO/ntPXFeo=
-github.com/dreampuf/mermaid.go v0.2.0/go.mod h1:9jSzOKzV59UX8Gc9EJ5xuiJeldHpTEmKxF2pwu42r2g=
+github.com/dreampuf/mermaid.go v0.1.0 h1:GGD2B6Eowkjjz6ATsvY3ldVSJwU7lS5ddeIKYsm1Yas=
+github.com/dreampuf/mermaid.go v0.1.0/go.mod h1:9jSzOKzV59UX8Gc9EJ5xuiJeldHpTEmKxF2pwu42r2g=
 github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433 h1:vymEbVwYFP/L05h5TKQxvkXoKxNvTpjxYKdF1Nlwuao=
 github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433/go.mod h1:tphK2c80bpPhMOI4v6bIc2xWywPfbqi1Z06+RcrMkDg=
 github.com/go-sourcemap/sourcemap v2.1.4+incompatible h1:a+iTbH5auLKxaNwQFg0B+TCYl6lbukKPc7b5x0n1s6Q=
@@ -78,8 +78,8 @@ github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUc
 github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
 github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
 github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
-github.com/rs/zerolog v1.35.1 h1:m7xQeoiLIiV0BCEY4Hs+j2NG4Gp2o2KPKmhnnLiazKI=
-github.com/rs/zerolog v1.35.1/go.mod h1:EjML9kdfa/RMA7h/6z6pYmq1ykOuA8/mjWaEvGI+jcw=
+github.com/rs/zerolog v1.35.0 h1:VD0ykx7HMiMJytqINBsKcbLS+BJ4WYjz+05us+LRTdI=
+github.com/rs/zerolog v1.35.0/go.mod h1:EjML9kdfa/RMA7h/6z6pYmq1ykOuA8/mjWaEvGI+jcw=
 github.com/stefanfritsch/goldmark-admonitions v1.1.1 h1:SncsICdQrIYYaq02Ta+zyc9gNmMfYqQH2qwLSCJYxA4=
 github.com/stefanfritsch/goldmark-admonitions v1.1.1/go.mod h1:cOZK5O0gE6eWfpxTdjGUmeONW2IL9j3Zujv3KlZWlLo=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
@@ -134,8 +134,8 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
 golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
-golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
-golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
+golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
+golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
 golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
@@ -75,7 +75,7 @@ func ProcessIncludes(
 	templates *template.Template,
 ) (*template.Template, []byte, bool, error) {
 	formatVardump := func(
-		data map[string]any,
+		data map[string]interface{},
 	) string {
 		var parts []string
 		for key, value := range data {
@@ -105,7 +105,7 @@ func ProcessIncludes(
 		left      = string(groups[3])
 		right     = string(groups[4])
 		config    = groups[5]
-		data      = map[string]any{}
+		data      = map[string]interface{}{}
 	)

 	if delimsNone == "none" {
@@ -37,7 +37,7 @@ func (macro *Macro) Apply(
 	content = macro.Regexp.ReplaceAllFunc(
 		content,
 		func(match []byte) []byte {
-			config := map[string]any{}
+			config := map[string]interface{}{}

 			err = yaml.Unmarshal([]byte(macro.Config), &config)
 			if err != nil {
@@ -63,21 +63,21 @@ func (macro *Macro) Apply(
 	return content, err
 }

-func (macro *Macro) configure(node any, groups [][]byte) any {
+func (macro *Macro) configure(node interface{}, groups [][]byte) interface{} {
 	switch node := node.(type) {
-	case map[any]any:
+	case map[interface{}]interface{}:
 		for key, value := range node {
 			node[key] = macro.configure(value, groups)
 		}

 		return node
-	case map[string]any:
+	case map[string]interface{}:
 		for key, value := range node {
 			node[key] = macro.configure(value, groups)
 		}

 		return node
-	case []any:
+	case []interface{}:
 		for key, value := range node {
 			node[key] = macro.configure(value, groups)
 		}
@@ -126,7 +126,7 @@ func ExtractMacros(
 		var macro Macro

 		if strings.HasPrefix(template, "#") {
-			cfg := map[string]any{}
+			cfg := map[string]interface{}{}

 			err = yaml.Unmarshal([]byte(config), &cfg)
 			if err != nil {
@@ -170,7 +170,7 @@ func ExtractMacros(
 	macro.Config = config

 	log.Trace().
-		Interface("vardump", map[string]any{
+		Interface("vardump", map[string]interface{}{
 			"expr":     expr,
 			"template": template,
 			"config":   macro.Config,
mark.go (363 changes)
@@ -4,9 +4,7 @@ import (
 	"bytes"
 	"crypto/sha1"
 	"encoding/hex"
-	"errors"
 	"fmt"
-	stdhtml "html"
 	"io"
 	"os"
 	"path/filepath"
@@ -14,7 +12,6 @@ import (
 	"slices"
 	"strings"
 	"time"
-	"unicode/utf8"

 	"github.com/bmatcuk/doublestar/v4"
 	"github.com/kovetskiy/mark/v16/attachment"
@@ -30,8 +27,6 @@ import (
 	"github.com/rs/zerolog/log"
 )

-var markerRegex = regexp.MustCompile(`(?s)<ac:inline-comment-marker ac:ref="([^"]+)">(.*?)</ac:inline-comment-marker>`)
-
 // Config holds all configuration options for running Mark.
 type Config struct {
 	// Connection settings
@@ -59,11 +54,10 @@ type Config struct {
 	ContentAppearance string

 	// Page updates
-	MinorEdit        bool
-	VersionMessage   string
-	EditLock         bool
-	ChangesOnly      bool
-	PreserveComments bool
+	MinorEdit      bool
+	VersionMessage string
+	EditLock       bool
+	ChangesOnly    bool

 	// Rendering
 	DropH1 bool
@@ -103,7 +97,7 @@ func Run(config Config) error {
 		if config.CI {
 			log.Warn().Msg(msg)
 		} else {
-			return errors.New(msg)
+			return fmt.Errorf("%s", msg)
 		}
 	}

@@ -287,7 +281,6 @@ func ProcessFile(file string, api *confluence.API, config Config) (*confluence.P
 	}

 	var target *confluence.PageInfo
-	var pageCreated bool

 	if meta != nil {
 		parent, pg, err := page.ResolvePage(false, api, meta)
@@ -304,7 +297,6 @@ func ProcessFile(file string, api *confluence.API, config Config) (*confluence.P
 			// conflict that can occur when attempting to update a page just
 			// after it was created. See issues/139.
 			time.Sleep(1 * time.Second)
-			pageCreated = true
 		}

 		target = pg
@@ -422,27 +414,6 @@ func ProcessFile(file string, api *confluence.API, config Config) (*confluence.P
 		finalVersionMessage = config.VersionMessage
 	}

-	// Only fetch the old body and inline comments when we know the page will
-	// actually be updated. This avoids unnecessary API round-trips for no-op
-	// runs (e.g. when --changes-only determines the content is unchanged).
-	if shouldUpdatePage && config.PreserveComments && !pageCreated {
-		pg, err := api.GetPageByIDExpanded(target.ID, "ancestors,version,body.storage")
-		if err != nil {
-			return nil, fmt.Errorf("unable to retrieve page body for comments: %w", err)
-		}
-		target = pg
-
-		comments, err := api.GetInlineComments(target.ID)
-		if err != nil {
-			return nil, fmt.Errorf("unable to retrieve inline comments: %w", err)
-		}
-
-		html, err = mergeComments(html, target.Body.Storage.Value, comments)
-		if err != nil {
-			return nil, fmt.Errorf("unable to merge inline comments: %w", err)
-		}
-	}
-
 	if shouldUpdatePage {
 		err = api.UpdatePage(
 			target,
@@ -559,327 +530,3 @@ func sha1Hash(input string) string {
 	h.Write([]byte(input))
 	return hex.EncodeToString(h.Sum(nil))
 }
-
-// htmlEscapeText escapes only the characters that Confluence storage HTML
-// always encodes in text nodes (&, <, >). Unlike html.EscapeString it does NOT
-// escape single-quotes or double-quotes, because those are frequently left
-// unescaped inside text nodes by the Confluence editor and by mark's own
-// renderer, so escaping them would prevent the selection-search from finding
-// a valid match.
-var htmlTextReplacer = strings.NewReplacer("&", "&amp;", "<", "&lt;", ">", "&gt;")
-
-func htmlEscapeText(s string) string {
-	return htmlTextReplacer.Replace(s)
-}
-
-// truncateSelection returns a truncated preview of s for use in log messages,
-// capped at maxRunes runes, with an ellipsis appended when trimmed.
-func truncateSelection(s string, maxRunes int) string {
-	runes := []rune(s)
-	if len(runes) <= maxRunes {
-		return s
-	}
-	return string(runes[:maxRunes]) + "…"
-}
-
-// contextBefore returns up to maxBytes of s ending at byteEnd, trimmed
-// forward to the nearest valid UTF-8 rune start so the slice is never
-// split across a multi-byte sequence.
-func contextBefore(s string, byteEnd, maxBytes int) string {
-	start := byteEnd - maxBytes
-	if start < 0 {
-		start = 0
-	}
-	for start < byteEnd && !utf8.RuneStart(s[start]) {
-		start++
-	}
-	return s[start:byteEnd]
-}
-
-// contextAfter returns up to maxBytes of s starting at byteStart, trimmed
-// back to the nearest valid UTF-8 rune start so the slice is never split
-// across a multi-byte sequence.
-func contextAfter(s string, byteStart, maxBytes int) string {
-	end := byteStart + maxBytes
-	if end >= len(s) {
-		return s[byteStart:]
-	}
-	for end > byteStart && !utf8.RuneStart(s[end]) {
-		end--
-	}
-	return s[byteStart:end]
-}
-
-func levenshteinDistance(s1, s2 string) int {
-	r1 := []rune(s1)
-	r2 := []rune(s2)
-
-	if len(r1) == 0 {
-		return len(r2)
-	}
-	if len(r2) == 0 {
-		return len(r1)
-	}
-
-	// Use two rolling rows instead of a full matrix to reduce allocations
-	// from O(m×n) to O(n). Swap r1/r2 so r2 is the shorter string, keeping
-	// the row width (len(r2)+1) as small as possible.
-	if len(r1) < len(r2) {
-		r1, r2 = r2, r1
-	}
-
-	prev := make([]int, len(r2)+1)
-	curr := make([]int, len(r2)+1)
-
-	for j := range prev {
-		prev[j] = j
-	}
-
-	for i := 1; i <= len(r1); i++ {
-		curr[0] = i
-		for j := 1; j <= len(r2); j++ {
-			cost := 0
-			if r1[i-1] != r2[j-1] {
-				cost = 1
-			}
-			curr[j] = min(
-				prev[j]+1,      // deletion
-				curr[j-1]+1,    // insertion
-				prev[j-1]+cost, // substitution
-			)
-		}
-		prev, curr = curr, prev
-	}
-	return prev[len(r2)]
-}
-
-type commentContext struct {
-	before string
-	after  string
-}
-
-// mergeComments re-embeds inline comment markers from the Confluence API into
-// newBody (the updated storage HTML about to be uploaded). It extracts context
-// from each existing marker in oldBody and uses Levenshtein distance to
-// relocate each marker to the best-matching position in newBody, so comment
-// threads survive page edits even when the surrounding text has shifted.
-//
-// At most maxCandidates occurrences of each selection are evaluated with
-// Levenshtein distance; further occurrences are ignored to bound CPU cost on
-// pages where a selection is short or very common.
-const maxCandidates = 100
-
-// contextWindowBytes is the number of bytes of surrounding text captured as
-// context around each inline-comment marker. It is used both when extracting
-// context from oldBody and when scoring candidates in newBody.
-const contextWindowBytes = 100
-
-func mergeComments(newBody string, oldBody string, comments *confluence.InlineComments) (string, error) {
-	if comments == nil {
-		return newBody, nil
-	}
-	// 1. Extract context for each comment from oldBody
-	contexts := make(map[string]commentContext)
-	matches := markerRegex.FindAllStringSubmatchIndex(oldBody, -1)
-	for _, match := range matches {
-		ref := oldBody[match[2]:match[3]]
-		// context around the tag
-		before := contextBefore(oldBody, match[0], contextWindowBytes)
-		after := contextAfter(oldBody, match[1], contextWindowBytes)
-		contexts[ref] = commentContext{
-			before: before,
-			after:  after,
-		}
-	}
-
-	type replacement struct {
-		start     int
-		end       int
-		ref       string
-		selection string
-	}
-	var replacements []replacement
-	seenRefs := make(map[string]bool)
-
-	for _, comment := range comments.Results {
-		if comment.Extensions.Location != "inline" {
-			log.Debug().
-				Str("location", comment.Extensions.Location).
-				Str("ref", comment.Extensions.InlineProperties.MarkerRef).
-				Msg("comment ignored during inline marker merge: not an inline comment")
-			continue
-		}
-
-		ref := comment.Extensions.InlineProperties.MarkerRef
-		selection := comment.Extensions.InlineProperties.OriginalSelection
-
-		if seenRefs[ref] {
-			// Multiple results share the same MarkerRef (e.g. threaded replies).
-			// The marker only needs to be inserted once; skip duplicates.
-			continue
-		}
-		// Mark ref as seen immediately so subsequent results for the same ref
-		// (threaded replies) are always deduplicated, even if this one is dropped.
-		seenRefs[ref] = true
-
-		if selection == "" {
-			log.Warn().
-				Str("ref", ref).
-				Msg("inline comment skipped: original selection is empty; comment will be lost")
-			continue
-		}
-
-		ctx, hasCtx := contexts[ref]
-
-		// Build the list of forms to search for in newBody. The escaped form
-		// is tried first (normal XML text nodes). The raw form is appended as a
-		// fallback for text inside CDATA-backed macro bodies (e.g. ac:code),
-		// where < and > are stored unescaped inside <![CDATA[...]]>.
-		escapedSelection := htmlEscapeText(selection)
-		searchForms := []string{escapedSelection}
-		if selection != escapedSelection {
-			searchForms = append(searchForms, selection)
-		}
-
-		var bestStart = -1
-		var bestEnd = -1
-		var minDistance = 1000000
-
-		// Iterate over search forms; stop as soon as we have a definitive best.
-		candidates := 0
-		stopSearch := false
-		for _, form := range searchForms {
-			if stopSearch {
-				break
-			}
-			currentPos := 0
-			for {
-				index := strings.Index(newBody[currentPos:], form)
-				if index == -1 {
-					break
-				}
-				start := currentPos + index
-				end := start + len(form)
-
-				// Skip candidates that start or end in the middle of a multi-byte
-				// UTF-8 rune; such a match would produce invalid UTF-8 output.
-				if !utf8.RuneStart(newBody[start]) || (end < len(newBody) && !utf8.RuneStart(newBody[end])) {
-					currentPos = start + 1
-					continue
-				}
-
-				candidates++
-				if candidates > maxCandidates {
-					stopSearch = true
-					break
-				}
-
-				if !hasCtx {
-					// No context available; use the first occurrence.
-					bestStart = start
-					bestEnd = end
-					stopSearch = true
-					break
-				}
-
-				newBefore := contextBefore(newBody, start, contextWindowBytes)
-				newAfter := contextAfter(newBody, end, contextWindowBytes)
-
-				// Fast path: exact context match is the best possible result.
-				if newBefore == ctx.before && newAfter == ctx.after {
-					bestStart = start
-					bestEnd = end
-					stopSearch = true
-					break
-				}
-
-				// Lower-bound pruning: Levenshtein distance is at least the
|
||||
// absolute difference in rune counts. Use rune counts (not byte
|
||||
// lengths) to match the unit levenshteinDistance operates on,
|
||||
// avoiding false skips for multibyte UTF-8 content.
|
||||
lbBefore := utf8.RuneCountInString(ctx.before) - utf8.RuneCountInString(newBefore)
|
||||
if lbBefore < 0 {
|
||||
lbBefore = -lbBefore
|
||||
}
|
||||
lbAfter := utf8.RuneCountInString(ctx.after) - utf8.RuneCountInString(newAfter)
|
||||
if lbAfter < 0 {
|
||||
lbAfter = -lbAfter
|
||||
}
|
||||
if lbBefore+lbAfter >= minDistance {
|
||||
currentPos = start + 1
|
||||
continue
|
||||
}
|
||||
|
||||
distance := levenshteinDistance(ctx.before, newBefore) + levenshteinDistance(ctx.after, newAfter)
|
||||
|
||||
if distance < minDistance {
|
||||
minDistance = distance
|
||||
bestStart = start
|
||||
bestEnd = end
|
||||
}
|
||||
|
||||
currentPos = start + 1
|
||||
}
|
||||
}
|
||||
|
||||
if bestStart != -1 {
|
||||
replacements = append(replacements, replacement{
|
||||
start: bestStart,
|
||||
end: bestEnd,
|
||||
ref: ref,
|
||||
selection: selection,
|
||||
})
|
||||
} else {
|
||||
log.Warn().
|
||||
Str("ref", ref).
|
||||
Str("selection_preview", truncateSelection(selection, 50)).
|
||||
Msg("inline comment dropped: selected text not found in new body; comment will be lost")
|
||||
}
|
||||
}
|
||||
|
||||
// Sort replacements from back to front to avoid offset issues.
|
||||
// Use a stable sort with ref as a tie-breaker so the ordering is
|
||||
// deterministic when two markers resolve to the same start offset.
|
||||
slices.SortStableFunc(replacements, func(a, b replacement) int {
|
||||
if a.start != b.start {
|
||||
return b.start - a.start
|
||||
}
|
||||
if a.ref < b.ref {
|
||||
return -1
|
||||
}
|
||||
if a.ref > b.ref {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
})
|
||||
|
||||
// Apply replacements back-to-front. Track the minimum start of any
|
||||
// applied replacement so that overlapping candidates (whose end exceeds
|
||||
// that boundary) are dropped rather than producing nested or malformed
|
||||
// <ac:inline-comment-marker> tags.
|
||||
minAppliedStart := len(newBody)
|
||||
for _, r := range replacements {
|
||||
if r.end > minAppliedStart {
|
||||
// This replacement overlaps with an already-applied one.
|
||||
// Drop it and warn so the user knows the comment was skipped.
|
||||
log.Warn().
|
||||
Str("ref", r.ref).
|
||||
Str("selection_preview", truncateSelection(r.selection, 50)).
|
||||
Int("start", r.start).
|
||||
Int("end", r.end).
|
||||
Int("conflicting_start", minAppliedStart).
|
||||
Msg("inline comment marker dropped: selection overlaps an already-placed marker")
|
||||
continue
|
||||
}
|
||||
minAppliedStart = r.start
|
||||
selection := newBody[r.start:r.end]
|
||||
withComment := fmt.Sprintf(
|
||||
`<ac:inline-comment-marker ac:ref="%s">%s</ac:inline-comment-marker>`,
|
||||
stdhtml.EscapeString(r.ref),
|
||||
selection,
|
||||
)
|
||||
newBody = newBody[:r.start] + withComment + newBody[r.end:]
|
||||
}
|
||||
|
||||
return newBody, nil
|
||||
}
|
||||
|
||||
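The lower-bound pruning in the loop above skips the quadratic Levenshtein computation whenever the difference in rune counts alone already reaches the current best distance: an edit distance can never be smaller than that difference, because each surplus rune costs at least one insertion or deletion. A minimal standalone sketch of the bound (the helper name `runeCountLowerBound` is illustrative, not part of mark):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// runeCountLowerBound returns |runes(a) - runes(b)|, a cheap lower bound
// on the Levenshtein distance between a and b: every difference in length
// requires at least one insertion or deletion.
func runeCountLowerBound(a, b string) int {
	d := utf8.RuneCountInString(a) - utf8.RuneCountInString(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// "héllo" is 6 bytes but 5 runes; using byte lengths here would
	// overestimate the bound and wrongly prune valid candidates.
	fmt.Println(runeCountLowerBound("héllo", "hello")) // 0
	fmt.Println(runeCountLowerBound("abc", "a"))       // 2
}
```

Because the bound is computed in O(n) while the full distance is O(n·m), candidates whose bound already meets `minDistance` can be discarded without ever running `levenshteinDistance`.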
mark_test.go
@@ -1,369 +0,0 @@
package mark

import (
	"testing"

	"github.com/kovetskiy/mark/v16/confluence"
	"github.com/stretchr/testify/assert"
)

// ---------------------------------------------------------------------------
// Helper function unit tests
// ---------------------------------------------------------------------------

func TestTruncateSelection(t *testing.T) {
	assert.Equal(t, "hello", truncateSelection("hello", 10))
	assert.Equal(t, "hello", truncateSelection("hello", 5))
	assert.Equal(t, "hell…", truncateSelection("hello", 4))
	assert.Equal(t, "", truncateSelection("", 5))
	// Multibyte runes count as single units.
	assert.Equal(t, "世界…", truncateSelection("世界 is the world", 2))
}

func TestLevenshteinDistance(t *testing.T) {
	tests := []struct {
		s1, s2 string
		want   int
	}{
		{"", "", 0},
		{"abc", "", 3},
		{"", "abc", 3},
		{"abc", "abc", 0},
		{"abc", "axc", 1}, // one substitution
		{"abc", "ab", 1},  // one deletion
		{"ab", "abc", 1},  // one insertion
		{"kitten", "sitting", 3},
		// Multibyte: é is one rune, so distance from "héllo" to "hello" is 1.
		{"héllo", "hello", 1},
	}
	for _, tt := range tests {
		t.Run(tt.s1+"/"+tt.s2, func(t *testing.T) {
			assert.Equal(t, tt.want, levenshteinDistance(tt.s1, tt.s2))
		})
	}
}

func TestContextBefore(t *testing.T) {
	// Basic cases.
	assert.Equal(t, "", contextBefore("hello", 0, 10))
	assert.Equal(t, "hello", contextBefore("hello", 5, 10))
	assert.Equal(t, "llo", contextBefore("hello", 5, 3))

	// "héllo" is 6 bytes (h=1, é=2, l=1, l=1, o=1).
	// maxBytes=4 → raw start=2, which lands mid-rune (é's continuation byte).
	// Should advance to byte 3 (first 'l').
	assert.Equal(t, "llo", contextBefore("héllo", 6, 4))
}

func TestContextAfter(t *testing.T) {
	// Basic cases.
	assert.Equal(t, "", contextAfter("hello", 5, 10))
	assert.Equal(t, "hello", contextAfter("hello", 0, 10))
	assert.Equal(t, "hel", contextAfter("hello", 0, 3))

	// "héllo" is 6 bytes. contextAfter(s, 0, 2) → raw end=2 (é's continuation
	// byte), which is not a rune start. Should back up to 1, returning just "h".
	assert.Equal(t, "h", contextAfter("héllo", 0, 2))
}

// makeComments builds an InlineComments value from alternating
// (selection, markerRef) pairs, all with location "inline".
func makeComments(pairs ...string) *confluence.InlineComments {
	c := &confluence.InlineComments{}
	for i := 0; i+1 < len(pairs); i += 2 {
		selection, ref := pairs[i], pairs[i+1]
		c.Results = append(c.Results, confluence.InlineCommentResult{
			Extensions: confluence.InlineCommentExtensions{
				Location: "inline",
				InlineProperties: confluence.InlineCommentProperties{
					OriginalSelection: selection,
					MarkerRef:         ref,
				},
			},
		})
	}
	return c
}

func TestMergeComments(t *testing.T) {
	body := "<p>Hello world</p>"
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-123">world</ac:inline-comment-marker></p>`
	comments := makeComments("world", "uuid-123")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-123">world</ac:inline-comment-marker></p>`, result)
}

func TestMergeComments_Escaping(t *testing.T) {
	body := "<p>Hello &amp; world</p>"
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-456">&amp;</ac:inline-comment-marker> world</p>`
	comments := makeComments("&", "uuid-456")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-456">&amp;</ac:inline-comment-marker> world</p>`, result)
}

func TestMergeComments_Disambiguation(t *testing.T) {
	body := "<p>Item one. Item two. Item one.</p>"
	// Comment is on the second "Item one."
	oldBody := `<p>Item one. Item two. <ac:inline-comment-marker ac:ref="uuid-1">Item one.</ac:inline-comment-marker></p>`
	comments := makeComments("Item one.", "uuid-1")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	// Context should correctly pick the second occurrence
	assert.Equal(t, `<p>Item one. Item two. <ac:inline-comment-marker ac:ref="uuid-1">Item one.</ac:inline-comment-marker></p>`, result)
}

// TestMergeComments_SelectionMissing verifies that a comment whose selection
// no longer appears in the new body is dropped without returning an error or panicking.
// A warning is logged so the user knows the comment was not relocated.
func TestMergeComments_SelectionMissing(t *testing.T) {
	body := "<p>Completely different content</p>"
	oldBody := `<p><ac:inline-comment-marker ac:ref="uuid-gone">old text</ac:inline-comment-marker></p>`
	comments := makeComments("old text", "uuid-gone")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	// Comment is dropped; body is returned unchanged.
	assert.Equal(t, body, result)
}

// TestMergeComments_OverlappingSelections verifies that when two comments
// reference overlapping text regions the later one (by position) is kept and
// the earlier overlapping one is dropped rather than corrupting the body.
func TestMergeComments_OverlappingSelections(t *testing.T) {
	body := "<p>foo bar baz</p>"
	// Neither comment has a marker in oldBody, so no positional context is
	// available; the algorithm falls back to a plain string search.
	oldBody := "<p>foo bar baz</p>"
	// "foo bar" starts at 3, ends at 10; "bar baz" starts at 7, ends at 14.
	// They overlap on "bar". The later match (uuid-B at position 7) wins.
	comments := makeComments("foo bar", "uuid-A", "bar baz", "uuid-B")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>foo <ac:inline-comment-marker ac:ref="uuid-B">bar baz</ac:inline-comment-marker></p>`, result)
}

// TestMergeComments_NilComments verifies that a nil comments pointer is
// handled gracefully and the new body is returned unchanged.
func TestMergeComments_NilComments(t *testing.T) {
	body := "<p>Hello world</p>"
	result, err := mergeComments(body, "", nil)
	assert.NoError(t, err)
	assert.Equal(t, body, result)
}

// TestMergeComments_HTMLEntities verifies that selections containing HTML
// entities (&lt;, &gt;) are matched correctly. The API returns raw (unescaped)
// text for OriginalSelection; htmlEscapeText encodes &, < and > to their
// entity forms before searching.
func TestMergeComments_HTMLEntities(t *testing.T) {
	body := `<p>Hello &lt;world&gt; it's me</p>`
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-ent">&lt;world&gt;</ac:inline-comment-marker> it's me</p>`
	// The API returns the raw (unescaped) selection text.
	comments := makeComments("<world>", "uuid-ent")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-ent">&lt;world&gt;</ac:inline-comment-marker> it's me</p>`, result)
}

// TestMergeComments_ApostropheEncoded verifies the known limitation: when a
// selection includes an apostrophe that Confluence stores as the numeric
// entity &#39; in the page body, mergeComments cannot locate the selection
// (htmlEscapeText does not encode ' to &#39;) and the comment is dropped with
// a warning rather than panicking or producing invalid output.
func TestMergeComments_ApostropheEncoded(t *testing.T) {
	// New body uses the &#39; entity (as Confluence sometimes stores apostrophes).
	body := `<p>Hello &lt;world&gt; it&#39;s me</p>`
	// Old body has the comment marker around a selection that includes an apostrophe.
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-apos-enc">&lt;world&gt; it&#39;s</ac:inline-comment-marker> me</p>`
	// The API returns the raw unescaped selection including a literal apostrophe.
	comments := makeComments("<world> it's", "uuid-apos-enc")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	// The comment is dropped (body unchanged) because htmlEscapeText("it's")
	// produces "it's", which doesn't match "it&#39;s" in the new body.
	assert.Equal(t, body, result)
}

// TestMergeComments_ApostropheSelection verifies that a selection containing a
// literal apostrophe is found when the new body also contains a literal
// apostrophe (as mark's renderer typically emits). This exercises the
// htmlEscapeText path which intentionally does not encode ' or ".
func TestMergeComments_ApostropheSelection(t *testing.T) {
	body := `<p>Hello it's a test</p>`
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-apos">it's</ac:inline-comment-marker> a test</p>`
	// The API returns the raw (unescaped) selection text with a literal apostrophe.
	comments := makeComments("it's", "uuid-apos")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-apos">it's</ac:inline-comment-marker> a test</p>`, result)
}

// TestMergeComments_NestedTags verifies that a marker whose stored content
// contains nested inline tags (e.g. <strong>) is still recognised by
// markerRegex and the comment is correctly relocated into the new body.
func TestMergeComments_NestedTags(t *testing.T) {
	// The new body contains plain bold text (no marker yet).
	body := "<p>Hello <strong>world</strong></p>"
	// The old body already has the marker wrapping the bold tag.
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-nested"><strong>world</strong></ac:inline-comment-marker></p>`
	// The API returns the raw selected text without markup.
	comments := makeComments("world", "uuid-nested")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <strong><ac:inline-comment-marker ac:ref="uuid-nested">world</ac:inline-comment-marker></strong></p>`, result)
}

// TestMergeComments_EmptySelection verifies that a comment with an empty
// OriginalSelection is skipped without panicking and the body is returned
// unchanged.
func TestMergeComments_EmptySelection(t *testing.T) {
	body := "<p>Hello world</p>"
	comments := makeComments("", "uuid-empty")

	result, err := mergeComments(body, body, comments)
	assert.NoError(t, err)
	assert.Equal(t, body, result)
}

// TestMergeComments_DuplicateMarkerRef verifies that multiple comment results
// sharing the same MarkerRef (e.g. threaded replies) produce exactly one
// <ac:inline-comment-marker> insertion rather than nested duplicates.
func TestMergeComments_DuplicateMarkerRef(t *testing.T) {
	body := "<p>Hello world</p>"
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-dup">world</ac:inline-comment-marker></p>`
	// Two results with identical ref — simulates threaded replies.
	comments := makeComments("world", "uuid-dup", "world", "uuid-dup")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-dup">world</ac:inline-comment-marker></p>`, result)
}

// ---------------------------------------------------------------------------
// Additional mergeComments scenario tests
// ---------------------------------------------------------------------------

// TestMergeComments_MultipleComments verifies that two non-overlapping comments
// are both correctly re-embedded via back-to-front replacement.
func TestMergeComments_MultipleComments(t *testing.T) {
	body := "<p>Hello world and foo bar</p>"
	oldBody := `<p>Hello <ac:inline-comment-marker ac:ref="uuid-1">world</ac:inline-comment-marker> and foo <ac:inline-comment-marker ac:ref="uuid-2">bar</ac:inline-comment-marker></p>`
	comments := makeComments("world", "uuid-1", "bar", "uuid-2")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>Hello <ac:inline-comment-marker ac:ref="uuid-1">world</ac:inline-comment-marker> and foo <ac:inline-comment-marker ac:ref="uuid-2">bar</ac:inline-comment-marker></p>`, result)
}

// TestMergeComments_EmptyResults verifies that an InlineComments value with a
// non-nil but empty Results slice is handled gracefully.
func TestMergeComments_EmptyResults(t *testing.T) {
	body := "<p>Hello world</p>"
	result, err := mergeComments(body, body, &confluence.InlineComments{})
	assert.NoError(t, err)
	assert.Equal(t, body, result)
}

// TestMergeComments_NonInlineLocation verifies that page-level comments
// (location != "inline") are silently skipped and the body is unchanged.
func TestMergeComments_NonInlineLocation(t *testing.T) {
	body := "<p>Hello world</p>"
	comments := &confluence.InlineComments{
		Results: []confluence.InlineCommentResult{
			{
				Extensions: confluence.InlineCommentExtensions{
					Location: "page",
					InlineProperties: confluence.InlineCommentProperties{
						OriginalSelection: "Hello",
						MarkerRef:         "uuid-page",
					},
				},
			},
		},
	}
	result, err := mergeComments(body, body, comments)
	assert.NoError(t, err)
	assert.Equal(t, body, result)
}

// TestMergeComments_NoContext verifies that when a comment's MarkerRef has no
// corresponding marker in oldBody (no context available) the first occurrence
// of the selection in the new body is used.
func TestMergeComments_NoContext(t *testing.T) {
	body := "<p>foo bar foo</p>"
	oldBody := "<p>foo bar foo</p>" // no markers → no context
	comments := makeComments("foo", "uuid-noctx")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	// First occurrence of "foo" is at position 3.
	assert.Equal(t, `<p><ac:inline-comment-marker ac:ref="uuid-noctx">foo</ac:inline-comment-marker> bar foo</p>`, result)
}

// TestMergeComments_UTF8 verifies that selections and bodies containing
// multibyte UTF-8 characters are handled correctly.
func TestMergeComments_UTF8(t *testing.T) {
	body := "<p>こんにちは世界</p>"
	oldBody := `<p>こんにちは<ac:inline-comment-marker ac:ref="uuid-jp">世界</ac:inline-comment-marker></p>`
	comments := makeComments("世界", "uuid-jp")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>こんにちは<ac:inline-comment-marker ac:ref="uuid-jp">世界</ac:inline-comment-marker></p>`, result)
}

// TestMergeComments_SelectionWithQuotes verifies that a selection containing
// apostrophes or double-quotes is found correctly in the new body even though
// html.EscapeString would encode those characters. Only &, <, > should be
// escaped when searching.
func TestMergeComments_SelectionWithQuotes(t *testing.T) {
	body := `<p>It's a "test" page</p>`
	oldBody := `<p>It's a <ac:inline-comment-marker ac:ref="uuid-q">"test"</ac:inline-comment-marker> page</p>`
	comments := makeComments(`"test"`, "uuid-q")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	assert.Equal(t, `<p>It's a <ac:inline-comment-marker ac:ref="uuid-q">"test"</ac:inline-comment-marker> page</p>`, result)
}

// TestMergeComments_DuplicateMarkerRefDropped verifies that when multiple
// comment results share the same MarkerRef and the selection cannot be found,
// only a single warning is emitted (not one per result).
func TestMergeComments_DuplicateMarkerRefDropped(t *testing.T) {
	body := "<p>Hello world</p>"
	// Duplicate refs, but selection "gone" is not present in body or oldBody.
	comments := makeComments("gone", "uuid-dup2", "gone", "uuid-dup2")

	result, err := mergeComments(body, body, comments)
	assert.NoError(t, err)
	assert.Equal(t, body, result) // body unchanged, single warning logged
}

// TestMergeComments_CDATASelection verifies that a selection inside a
// CDATA-backed macro body (e.g. ac:code) is matched even though < and > are
// stored as raw characters rather than HTML entities. The raw form is tried as
// a fallback when the escaped form is not found.
func TestMergeComments_CDATASelection(t *testing.T) {
	// New body contains a code macro with CDATA — raw < and > in the content.
	body := `<ac:structured-macro ac:name="code"><ac:plain-text-body><![CDATA[func foo() { return <nil> }]]></ac:plain-text-body></ac:structured-macro>`
	// Old body has the marker around the raw selection inside CDATA.
	oldBody := `<ac:structured-macro ac:name="code"><ac:plain-text-body><![CDATA[func foo() { return <ac:inline-comment-marker ac:ref="uuid-cdata"><nil></ac:inline-comment-marker> }]]></ac:plain-text-body></ac:structured-macro>`
	// The API returns the raw (unescaped) selection.
	comments := makeComments("<nil>", "uuid-cdata")

	result, err := mergeComments(body, oldBody, comments)
	assert.NoError(t, err)
	// The raw selection "<nil>" should be found and wrapped with a marker.
	assert.Equal(t, `<ac:structured-macro ac:name="code"><ac:plain-text-body><![CDATA[func foo() { return <ac:inline-comment-marker ac:ref="uuid-cdata"><nil></ac:inline-comment-marker> }]]></ac:plain-text-body></ac:structured-macro>`, result)
}
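The back-to-front application order exercised by TestMergeComments_MultipleComments can be shown in isolation: splicing replacements at descending start offsets means each splice only shifts bytes after positions that are already processed, so the byte offsets of the remaining (earlier) spans stay valid. A hedged sketch of the technique, using illustrative names rather than mark's own types:

```go
package main

import (
	"fmt"
	"sort"
)

// span is a half-open byte range [start, end) and its replacement text.
type span struct {
	start, end int
	repl       string
}

// applyBackToFront splices each replacement into s, highest start first,
// so a splice never invalidates the offsets of spans that come before it.
func applyBackToFront(s string, spans []span) string {
	sort.Slice(spans, func(i, j int) bool { return spans[i].start > spans[j].start })
	for _, sp := range spans {
		s = s[:sp.start] + sp.repl + s[sp.end:]
	}
	return s
}

func main() {
	body := "Hello world and foo bar"
	out := applyBackToFront(body, []span{
		{6, 11, "<m>world</m>"}, // "world" at bytes [6, 11)
		{20, 23, "<m>bar</m>"},  // "bar" at bytes [20, 23)
	})
	fmt.Println(out) // Hello <m>world</m> and foo <m>bar</m>
}
```

Applying front-to-back instead would require re-adjusting every later offset by the length delta of each splice; sorting descending avoids that bookkeeping entirely.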
@@ -46,7 +46,6 @@ type Meta struct {
 const (
 	FullWidthContentAppearance = "full-width"
 	FixedContentAppearance     = "fixed"
-	DefaultContentAppearance   = "default"
 )

 var (
@@ -123,12 +122,9 @@ func ExtractMeta(data []byte, spaceFromCli string, titleFromH1 bool, titleFromFi
 			continue

 		case ContentAppearance:
-			switch strings.TrimSpace(value) {
-			case FixedContentAppearance:
+			if strings.TrimSpace(value) == FixedContentAppearance {
 				meta.ContentAppearance = FixedContentAppearance
-			case DefaultContentAppearance:
-				meta.ContentAppearance = DefaultContentAppearance
-			default:
+			} else {
 				meta.ContentAppearance = FullWidthContentAppearance
 			}

@@ -170,12 +166,9 @@ func ExtractMeta(data []byte, spaceFromCli string, titleFromH1 bool, titleFromFi

 	// Use the global content appearance flag if the header is not set in the document
 	if meta != nil && defaultContentAppearance != "" && meta.ContentAppearance == "" {
-		switch strings.TrimSpace(defaultContentAppearance) {
-		case FixedContentAppearance:
+		if strings.TrimSpace(defaultContentAppearance) == FixedContentAppearance {
 			meta.ContentAppearance = FixedContentAppearance
-		case DefaultContentAppearance:
-			meta.ContentAppearance = DefaultContentAppearance
-		default:
+		} else {
 			meta.ContentAppearance = FullWidthContentAppearance
 		}
 	} else if meta != nil && meta.ContentAppearance == "" {
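The two hunks above replace a three-way switch with a two-way if/else, dropping the `default` appearance value. The removed switch-based mapping can be sketched on its own; `resolveAppearance` is an illustrative stand-in for the header-parsing logic, not mark's API:

```go
package main

import (
	"fmt"
	"strings"
)

const (
	fullWidthAppearance = "full-width"
	fixedAppearance     = "fixed"
	defaultAppearance   = "default"
)

// resolveAppearance mirrors the switch-based mapping: recognised values
// pass through after whitespace trimming; anything unrecognised falls
// back to full-width.
func resolveAppearance(header string) string {
	switch strings.TrimSpace(header) {
	case fixedAppearance:
		return fixedAppearance
	case defaultAppearance:
		return defaultAppearance
	default:
		return fullWidthAppearance
	}
}

func main() {
	fmt.Println(resolveAppearance(" fixed ")) // fixed
	fmt.Println(resolveAppearance("default")) // default
	fmt.Println(resolveAppearance("narrow"))  // full-width
}
```

Collapsing this to an if/else is what removes `default` as a distinct outcome: any value other than `fixed` then maps to full-width.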
@@ -88,22 +88,4 @@ func TestExtractMetaContentAppearance(t *testing.T) {
 		assert.NotNil(t, meta)
 		assert.Equal(t, FullWidthContentAppearance, meta.ContentAppearance)
 	})
-
-	t.Run("default appearance via cli flag", func(t *testing.T) {
-		data := []byte("<!-- Space: DOC -->\n<!-- Title: Example -->\n\nbody\n")
-
-		meta, _, err := ExtractMeta(data, "", false, false, "", nil, false, DefaultContentAppearance)
-		assert.NoError(t, err)
-		assert.NotNil(t, meta)
-		assert.Equal(t, DefaultContentAppearance, meta.ContentAppearance)
-	})
-
-	t.Run("default appearance via header", func(t *testing.T) {
-		data := []byte("<!-- Space: DOC -->\n<!-- Title: Example -->\n<!-- Content-Appearance: default -->\n\nbody\n")
-
-		meta, _, err := ExtractMeta(data, "", false, false, "", nil, false, "")
-		assert.NoError(t, err)
-		assert.NotNil(t, meta)
-		assert.Equal(t, DefaultContentAppearance, meta.ContentAppearance)
-	})
 }
util/cli.go
@@ -7,9 +7,9 @@ import (
 	"path/filepath"
 	"strings"

-	mark "github.com/kovetskiy/mark/v16"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
+	mark "github.com/kovetskiy/mark/v16"
 	"github.com/urfave/cli/v3"
 )

@@ -23,7 +23,7 @@ func RunMark(ctx context.Context, cmd *cli.Command) error {
 	output := zerolog.ConsoleWriter{
 		Out:        os.Stderr,
 		TimeFormat: "2006-01-02 15:04:05.000",
-		FormatLevel: func(i any) string {
+		FormatLevel: func(i interface{}) string {
 			var l string
 			if ll, ok := i.(string); ok {
 				switch ll {
@@ -49,16 +49,16 @@ func RunMark(ctx context.Context, cmd *cli.Command) error {
 			}
 			return l
 		},
-		FormatFieldName: func(i any) string {
+		FormatFieldName: func(i interface{}) string {
 			return ""
 		},
-		FormatFieldValue: func(i any) string {
+		FormatFieldValue: func(i interface{}) string {
 			return fmt.Sprintf("%s", i)
 		},
-		FormatErrFieldName: func(i any) string {
+		FormatErrFieldName: func(i interface{}) string {
 			return ""
 		},
-		FormatErrFieldValue: func(i any) string {
+		FormatErrFieldValue: func(i interface{}) string {
 			return fmt.Sprintf("%s", i)
 		},
 	}
@@ -111,11 +111,10 @@ func RunMark(ctx context.Context, cmd *cli.Command) error {
 		TitleAppendGeneratedHash: cmd.Bool("title-append-generated-hash"),
 		ContentAppearance:        cmd.String("content-appearance"),

-		MinorEdit:        cmd.Bool("minor-edit"),
-		VersionMessage:   cmd.String("version-message"),
-		EditLock:         cmd.Bool("edit-lock"),
-		ChangesOnly:      cmd.Bool("changes-only"),
-		PreserveComments: cmd.Bool("preserve-comments"),
+		MinorEdit:      cmd.Bool("minor-edit"),
+		VersionMessage: cmd.String("version-message"),
+		EditLock:       cmd.Bool("edit-lock"),
+		ChangesOnly:    cmd.Bool("changes-only"),

 		DropH1:          cmd.Bool("drop-h1"),
 		StripLinebreaks: cmd.Bool("strip-linebreaks"),

@@ -14,7 +14,7 @@ func NewErrorHandler(continueOnError bool) *FatalErrorHandler {
 	}
 }

-func (h *FatalErrorHandler) Handle(err error, format string, args ...any) {
+func (h *FatalErrorHandler) Handle(err error, format string, args ...interface{}) {

 	if err == nil {
 		if h.ContinueOnError {

@@ -169,7 +169,7 @@ var Flags = []cli.Flag{
 	&cli.StringFlag{
 		Name:  "content-appearance",
 		Value: "",
-		Usage: "default content appearance for pages without a Content-Appearance header. Possible values: full-width, fixed, default.",
+		Usage: "default content appearance for pages without a Content-Appearance header. Possible values: full-width, fixed.",
 		Sources: cli.NewValueSourceChain(
 			cli.EnvVar("MARK_CONTENT_APPEARANCE"),
 			altsrctoml.TOML("content-appearance", altsrc.NewStringPtrSourcer(&filename)),
@@ -194,12 +194,6 @@ var Flags = []cli.Flag{
 		Usage:   "Avoids re-uploading pages that haven't changed since the last run.",
 		Sources: cli.NewValueSourceChain(cli.EnvVar("MARK_CHANGES_ONLY"), altsrctoml.TOML("changes-only", altsrc.NewStringPtrSourcer(&filename))),
 	},
-	&cli.BoolFlag{
-		Name:    "preserve-comments",
-		Value:   false,
-		Usage:   "Fetch and preserve inline comments on existing Confluence pages.",
-		Sources: cli.NewValueSourceChain(cli.EnvVar("MARK_PRESERVE_COMMENTS"), altsrctoml.TOML("preserve-comments", altsrc.NewStringPtrSourcer(&filename))),
-	},
 	&cli.FloatFlag{
 		Name:  "d2-scale",
 		Value: 1.0,
@@ -236,11 +230,11 @@ func CheckFlags(context context.Context, command *cli.Command) (context.Context,
 	contentAppearance := strings.TrimSpace(command.String("content-appearance"))
 	if contentAppearance != "" {
 		switch contentAppearance {
-		case "full-width", "fixed", "default":
+		case "full-width", "fixed":
 			// ok
 		default:
 			return context, fmt.Errorf(
-				"invalid value for --content-appearance: %q (expected: full-width, fixed, or default)",
+				"invalid value for --content-appearance: %q (expected: full-width or fixed)",
 				contentAppearance,
 			)
 		}