Documentation ¶
Overview ¶
Package textmate tokenizes source files using TextMate grammars, primarily for syntax highlighting. The workflow has two steps: 1) the JSON grammar is parsed into an internal rule tree (MatchRule); 2) the tokenizer walks that tree and emits scoped tokens.
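A minimal sketch of that workflow. The import path is hypothetical, LoadGrammar's signature is assumed to be (path string) (*Grammar, error), and g.NewStack is a stand-in for however the initial *StackItem is obtained, which this documentation does not specify:

package main

import (
	"fmt"
	"log"

	"example.com/textmate" // hypothetical import path
)

func main() {
	// Step 1: parse and compile the JSON grammar into a rule tree.
	g, err := textmate.LoadGrammar("go.tmLanguage.json") // signature assumed
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: walk the rules over one line of input, emitting scoped tokens.
	// See TokenizeLine below for the per-line loop over a whole file.
	line := "package main\n"
	_, err = textmate.TokenizeLine(0, line, 0, len(line), g.NewStack(), func(t *textmate.Token) {
		fmt.Println(t.Scope, t.Start, t.Length)
	})
	if err != nil {
		log.Fatal(err)
	}
}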
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
ErrScopeName = errors.New("unexpected `scopeName`")
)
var GrammarExtension = ".tmLanguage.json"
GrammarExtension is the expected file extension for grammar files; it is used when resolving "source.*" includes.
Functions ¶
func CompareToken ¶
Types ¶
type Grammar ¶
type Grammar struct {
// contains filtered or unexported fields
}
Grammar is the compiled grammar with precompiled regexes and an executable rule tree.
func CompileGrammar ¶
func CompileGrammar(j GrammarJSON, dirname string, filename string) (*Grammar, error)
CompileGrammar compiles a decoded GrammarJSON into an executable Grammar. dirname determines where 'source.*' includes are resolved and defaults to `.`; filename is used to strictly validate j.ScopeName against "source.<basename>" and may be omitted.
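When the grammar bytes come from somewhere other than a file on disk (an embedded asset, say), GrammarJSON can be decoded directly and compiled. A sketch, reading "may be omitted" as passing an empty filename:

import (
	"encoding/json"

	"example.com/textmate" // hypothetical import path
)

// compileEmbedded compiles a grammar held in memory.
func compileEmbedded(raw []byte) (*textmate.Grammar, error) {
	var j textmate.GrammarJSON
	if err := json.Unmarshal(raw, &j); err != nil {
		return nil, err
	}
	// "." resolves 'source.*' includes against the working directory (the
	// documented default); the empty filename skips the scopeName check.
	return textmate.CompileGrammar(j, ".", "")
}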
func LoadGrammar ¶
LoadGrammar reads a *.tmLanguage.json file, validates its scopeName against the file name, and compiles it into a usable Grammar.
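The signature is not reproduced above; assuming the natural form (path string) (*Grammar, error), a mismatch between the file name and its scopeName surfaces as ErrScopeName:

// loadGoGrammar loads a grammar whose file name implies scope "source.go".
func loadGoGrammar() (*textmate.Grammar, error) {
	// Signature assumed: LoadGrammar(path string) (*Grammar, error).
	// "go.tmLanguage.json" must declare "scopeName": "source.go";
	// anything else fails the strict check with ErrScopeName.
	g, err := textmate.LoadGrammar("grammars/go.tmLanguage.json")
	if errors.Is(err, textmate.ErrScopeName) {
		return nil, fmt.Errorf("file name and scopeName disagree: %w", err)
	}
	return g, err
}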
type GrammarJSON ¶
type GrammarJSON struct {
ScopeName string `json:"scopeName" plist:"scopeName"`
FileTypes []string `json:"fileTypes" plist:"fileTypes"`
FoldingStart string `json:"foldingStartMarker" plist:"foldingStartMarker"`
FoldingEnd string `json:"foldingStopMarker" plist:"foldingStopMarker"`
FirstLine string `json:"firstLineMatch" plist:"firstLineMatch"`
Repository map[string]RuleJSON `json:"repository" plist:"repository"`
Patterns []RuleJSON `json:"patterns" plist:"patterns"`
}
GrammarJSON mirrors the (subset of) TextMate JSON/Plist grammar on disk. It is decoded as-is and later compiled into Grammar.
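For reference, a tiny grammar in exactly that shape (the scope and rule are illustrative), decoded as-is:

// decodeMinimal decodes a minimal grammar into GrammarJSON.
func decodeMinimal() (textmate.GrammarJSON, error) {
	const minimal = `{
	  "scopeName": "source.example",
	  "fileTypes": ["ex"],
	  "patterns": [
	    {"name": "comment.line.example", "match": "#.*$"}
	  ]
	}`
	var j textmate.GrammarJSON
	err := json.Unmarshal([]byte(minimal), &j)
	// j.ScopeName == "source.example"; len(j.Patterns) == 1
	return j, err
}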
type Mapper ¶
type Mapper [][]*Token
Mapper is an index→tokens structure: for each byte position it stores the tokens covering that position. It is useful for renderers that redraw only when the set of active tokens changes.
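The documentation does not show how a Mapper is built; a sketch of one plausible construction and of the intended consumption pattern:

// buildMapper indexes tokens by the byte positions they cover.
func buildMapper(n int, tokens []*textmate.Token) textmate.Mapper {
	m := make(textmate.Mapper, n)
	for _, t := range tokens {
		for i := t.Start; i < t.Start+t.Length && i < n; i++ {
			m[i] = append(m[i], t)
		}
	}
	return m
}

// render walks the mapper and restyles only where the active set changes.
func render(m textmate.Mapper) {
	for i := range m {
		if i == 0 || !sameTokens(m[i-1], m[i]) {
			// restyle output from byte i using the deepest token in m[i]
		}
	}
}

func sameTokens(a, b []*textmate.Token) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}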
type RuleJSON ¶
type RuleJSON struct {
Name string `json:"name" plist:"name"`
Match string `json:"match" plist:"match"`
Begin string `json:"begin" plist:"begin"`
End string `json:"end" plist:"end"`
Patterns []RuleJSON `json:"patterns" plist:"patterns"`
Captures map[string]RuleJSON `json:"captures" plist:"captures"`
BeginCaptures map[string]RuleJSON `json:"beginCaptures" plist:"beginCaptures"`
EndCaptures map[string]RuleJSON `json:"endCaptures" plist:"endCaptures"`
Include string `json:"include" plist:"include"`
}
RuleJSON is a raw grammar rule as found in the JSON file. Note that capture groups are addressed by string indices ("1", "2", ...).
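For example, a match rule whose first capture group gets its own scope (the names are illustrative):

var functionRule = textmate.RuleJSON{
	Name:  "meta.function.example",
	Match: `func\s+(\w+)`,
	Captures: map[string]textmate.RuleJSON{
		// capture group 1 is addressed by the string "1", not the int 1
		"1": {Name: "entity.name.function.example"},
	},
}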
type StackItem ¶
type StackItem struct {
// contains filtered or unexported fields
}
StackItem is one frame on the parse stack carrying the active rule context.
func TokenizeLine ¶
func TokenizeLine(offset int, text string, start int, end int, top *StackItem, yield func(*Token)) (*StackItem, error)
TokenizeLine tokenizes text[start:end] within the given stack context. It always guarantees progress: if nothing matches, it emits a 1-byte filler token (Scope: "").
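Thanks to that guarantee, a caller can loop line by line without risking a stall, skipping the unscoped fillers. A sketch, assuming the initial top comes from the compiled grammar:

// tokenizeAll feeds src to TokenizeLine one line at a time,
// threading the stack between calls and dropping filler tokens.
func tokenizeAll(src string, top *textmate.StackItem) ([]*textmate.Token, error) {
	var out []*textmate.Token
	offset := 0
	for _, line := range strings.SplitAfter(src, "\n") {
		if line == "" {
			continue
		}
		var err error
		top, err = textmate.TokenizeLine(offset, line, 0, len(line), top, func(t *textmate.Token) {
			if t.Scope == "" {
				return // 1-byte filler emitted only to guarantee progress
			}
			out = append(out, t)
		})
		if err != nil {
			return nil, err
		}
		offset += len(line)
	}
	return out, nil
}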
type Token ¶
type Token struct {
// Scope assigned by the grammar
Scope string
// Start is the byte index in the text where the token begins
Start int
// Length of the token in bytes
Length int
// Depth of nesting; when tokens overlap, the token with the higher Depth should be used
Depth int
}
Token describes a scoped span in the input. Tokens may overlap; render the token with the highest Depth at a position.
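A sketch of that overlap rule, with deepestAt as an illustrative helper (not part of the package):

// deepestAt returns the token that should style byte position pos,
// or nil if no token covers it.
func deepestAt(tokens []*textmate.Token, pos int) *textmate.Token {
	var best *textmate.Token
	for _, t := range tokens {
		if pos < t.Start || pos >= t.Start+t.Length {
			continue
		}
		if best == nil || t.Depth > best.Depth {
			best = t
		}
	}
	return best
}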