Grammalecte: Check-in [c784e6eb04]

Overview
Comment: [core][build][fr] merge rg: GC ENGINE REWRITTEN (tokenization and rules merged as graphs)
SHA3-256: c784e6eb04b3c1ce0c921dff4bb7ed16342b98c00d3327b7982b81f2eec7c183
User & Date: olr on 2018-09-19 18:36:33
Context
2018-09-19
18:42  [server] useless imports  (check-in: 7181f546b3, user: olr, tags: server, trunk)
18:36  [core][build][fr] merge rg: GC ENGINE REWRITTEN (tokenization and rules merged as graphs)  (check-in: c784e6eb04, user: olr, tags: build, core, fr, major_change, new_feature, trunk)
16:04  [fr][bug] lexicographe: detection of the end of a lemma  (Closed-Leaf check-in: e9f97a8a3d, user: olr, tags: fr, rg)
2018-08-20
09:48  [fr] update of the Hunspell dictionary  (check-in: 9247b2625d, user: olr, tags: fr, trunk)
Changes

Modified compile_rules.py from [e584887b25] to [1079fd02f6]. The previous version of the file is shown first, followed by the new version.

import re
import traceback
import json

import compile_rules_js_convert as jsconv



dDEF = {}
lFUNCTIONS = []

aRULESET = set()     # set of rule-ids to check if there is several rules with the same id
nRULEWITHOUTNAME = 0

dJSREGEXES = {}

sWORDLIMITLEFT  = r"(?<![\w.,–-])"   # r"(?<![-.,—])\b"  seems slower
sWORDLIMITRIGHT = r"(?![\w–-])"      # r"\b(?!-—)"       seems slower


def prepareFunction (s):

    s = s.replace("__also__", "bCondMemo")
    s = s.replace("__else__", "not bCondMemo")
    s = re.sub(r"isStart *\(\)", 'before("^ *$|, *$")', s)
    s = re.sub(r"isRealStart *\(\)", 'before("^ *$")', s)
    s = re.sub(r"isStart0 *\(\)", 'before0("^ *$|, *$")', s)
    s = re.sub(r"isRealStart0 *\(\)", 'before0("^ *$")', s)
    s = re.sub(r"isEnd *\(\)", 'after("^ *$|^,")', s)
    s = re.sub(r"isRealEnd *\(\)", 'after("^ *$")', s)
    s = re.sub(r"isEnd0 *\(\)", 'after0("^ *$|^,")', s)
    s = re.sub(r"isRealEnd0 *\(\)", 'after0("^ *$")', s)
    s = re.sub(r"(select|exclude)[(][\\](\d+)", '\\1(dDA, m.start(\\2), m.group(\\2)', s)
    s = re.sub(r"define[(][\\](\d+)", 'define(dDA, m.start(\\1)', s)
    s = re.sub(r"(morph|morphex|displayInfo)[(][\\](\d+)", '\\1((m.start(\\2), m.group(\\2))', s)
    s = re.sub(r"(morph|morphex|displayInfo)[(]", '\\1(dDA, ', s)
    s = re.sub(r"(sugg\w+|switch\w+)\(@", '\\1(m.group(i[4])', s)
    s = re.sub(r"word\(\s*1\b", 'nextword1(s, m.end()', s)                                  # word(1)
    s = re.sub(r"word\(\s*-1\b", 'prevword1(s, m.start()', s)                               # word(-1)
    s = re.sub(r"word\(\s*(\d)", 'nextword(s, m.end(), \\1', s)                             # word(n)
    s = re.sub(r"word\(\s*-(\d)", 'prevword(s, m.start(), \\1', s)                          # word(-n)
    s = re.sub(r"before\(\s*", 'look(s[:m.start()], ', s)                                   # before(s)
    s = re.sub(r"after\(\s*", 'look(s[m.end():], ', s)                                      # after(s)
    s = re.sub(r"textarea\(\s*", 'look(s, ', s)                                             # textarea(s)
    s = re.sub(r"before_chk1\(\s*", 'look_chk1(dDA, s[:m.start()], 0, ', s)                 # before_chk1(s)
    s = re.sub(r"after_chk1\(\s*", 'look_chk1(dDA, s[m.end():], m.end(), ', s)              # after_chk1(s)
    s = re.sub(r"textarea_chk1\(\s*", 'look_chk1(dDA, s, 0, ', s)                           # textarea_chk1(s)
    s = re.sub(r"/0", 'sx[m.start():m.end()]', s)                                           # /0
    s = re.sub(r"before0\(\s*", 'look(sx[:m.start()], ', s)                                 # before0(s)
    s = re.sub(r"after0\(\s*", 'look(sx[m.end():], ', s)                                    # after0(s)
    s = re.sub(r"textarea0\(\s*", 'look(sx, ', s)                                           # textarea0(s)
    s = re.sub(r"before0_chk1\(\s*", 'look_chk1(dDA, sx[:m.start()], 0, ', s)               # before0_chk1(s)
    s = re.sub(r"after0_chk1\(\s*", 'look_chk1(dDA, sx[m.end():], m.end(), ', s)            # after0_chk1(s)
    s = re.sub(r"textarea0_chk1\(\s*", 'look_chk1(dDA, sx, 0, ', s)                         # textarea0_chk1(s)
    s = re.sub(r"isEndOfNG\(\s*\)", 'isEndOfNG(dDA, s[m.end():], m.end())', s)              # isEndOfNG(s)
    s = re.sub(r"isNextNotCOD\(\s*\)", 'isNextNotCOD(dDA, s[m.end():], m.end())', s)        # isNextNotCOD(s)
    s = re.sub(r"isNextVerb\(\s*\)", 'isNextVerb(dDA, s[m.end():], m.end())', s)            # isNextVerb(s)
    s = re.sub(r"\bspell *[(]", '_oSpellChecker.isValid(', s)
    s = re.sub(r"[\\](\d+)", 'm.group(\\1)', s)
    return s


def uppercase (s, sLang):
    "(flag i is not enough): converts regex to uppercase regex: 'foo' becomes '[Ff][Oo][Oo]', but 'Bar' becomes 'B[Aa][Rr]'."
................................................................................
            nState = 4
        elif nState == 4:
            nState = 0
    return sUp


def countGroupInRegex (sRegex):

    try:
        return re.compile(sRegex).groups
    except:
        traceback.print_exc()
        print(sRegex)
    return 0


def createRule (s, nIdLine, sLang, bParagraph, dOptPriority):
    "returns rule as list [option name, regex, bCaseInsensitive, identifier, list of actions]"
    global dJSREGEXES
    global nRULEWITHOUTNAME

    #### OPTIONS
    sLineId = str(nIdLine) + ("p" if bParagraph else "s")
    sRuleId = sLineId

    sOption = False         # False or [a-z0-9]+ name
    nPriority = 4           # Default is 4, value must be between 0 and 9
    tGroups = None          # code for groups positioning (only useful for JavaScript)
    cCaseMode = 'i'         # i: case insensitive,  s: case sensitive,  u: uppercasing allowed
    cWordLimitLeft = '['    # [: word limit, <: no specific limit
    cWordLimitRight = ']'   # ]: word limit, >: no specific limit
    m = re.match("^__(?P<borders_and_case>[[<]\\w[]>])(?P<option>/[a-zA-Z0-9]+|)(?P<ruleid>\\(\\w+\\)|)(?P<priority>![0-9]|)__ *", s)
    if m:
        cWordLimitLeft = m.group('borders_and_case')[0]
        cCaseMode = m.group('borders_and_case')[1]
        cWordLimitRight = m.group('borders_and_case')[2]
        sOption = m.group('option')[1:]  if m.group('option')  else False
        if m.group('ruleid'):
            sRuleId =  m.group('ruleid')[1:-1]
................................................................................
        sRegex = sRegex.replace("(?i)", "")
        sRegex = uppercase(sRegex, sLang)
    else:
        print("# Unknown case mode [" + cCaseMode + "] at line " + sLineId)

    ## check regex
    try:
        z = re.compile(sRegex)
    except:
        print("# Regex error at line ", nIdLine)
        print(sRegex)
        traceback.print_exc()
        return None
    ## groups in non grouping parenthesis
    for x in re.finditer("\(\?:[^)]*\([[\w -]", sRegex):
        print("# Warning: groups inside non grouping parenthesis in regex at line " + sLineId)

    #### PARSE ACTIONS
    lActions = []
    nAction = 1
    for sAction in s.split(" <<- "):
        t = createAction(sRuleId + "_" + str(nAction), sAction, nGroup)
................................................................................
        if t:
            lActions.append(t)
    if not lActions:
        return None

    return [sOption, sRegex, bCaseInsensitive, sLineId, sRuleId, nPriority, lActions, tGroups]


def createAction (sIdAction, sAction, nGroup):
    "returns an action to perform as a tuple (condition, action type, action[, iGroup [, message, URL ]])"
    global lFUNCTIONS

    m = re.search(r"([-~=>])(\d*|)>>", sAction)
    if not m:
        print("# No action at line " + sIdAction)
        return None

    #### CONDITION
    sCondition = sAction[:m.start()].strip()
    if sCondition:
        sCondition = prepareFunction(sCondition)
        lFUNCTIONS.append(("c_"+sIdAction, sCondition))
        for x in re.finditer("[.](?:group|start|end)[(](\d+)[)]", sCondition):
            if int(x.group(1)) > nGroup:
                print("# Error in groups in condition at line " + sIdAction + " ("+str(nGroup)+" groups only)")
        if ".match" in sCondition:
            print("# Error. JS compatibility. Don't use .match() in condition, use .search()")
        sCondition = "c_"+sIdAction
    else:
        sCondition = None

    #### iGroup / positioning
    iGroup = int(m.group(2)) if m.group(2) else 0
    if iGroup > nGroup:
        print("# Selected group > group number in regex at line " + sIdAction)
................................................................................
            sMsg = sAction[iMsg+3:].strip()
            sAction = sAction[:iMsg].strip()
            sURL = ""
            mURL = re.search("[|] *(https?://.*)", sMsg)
            if mURL:
                sURL = mURL.group(1).strip()
                sMsg = sMsg[:mURL.start(0)].strip()

            if sMsg[0:1] == "=":
                sMsg = prepareFunction(sMsg[1:])
                lFUNCTIONS.append(("m_"+sIdAction, sMsg))
                for x in re.finditer("group[(](\d+)[)]", sMsg):
                    if int(x.group(1)) > nGroup:
                        print("# Error in groups in message at line " + sIdAction + " ("+str(nGroup)+" groups only)")
                sMsg = "=m_"+sIdAction
            else:
                for x in re.finditer(r"\\(\d+)", sMsg):
                    if int(x.group(1)) > nGroup:
                        print("# Error in groups in message at line " + sIdAction + " ("+str(nGroup)+" groups only)")
                if re.search("[.]\\w+[(]", sMsg):
                    print("# Error in message at line " + sIdAction + ":  This message looks like code. Line should begin with =")



    if sAction[0:1] == "=" or cAction == "=":
        if "define" in sAction and not re.search(r"define\(\\\d+ *, *\[.*\] *\)", sAction):
            print("# Error in action at line " + sIdAction + ": second argument for define must be a list of strings")
        sAction = prepareFunction(sAction)
        sAction = sAction.replace("m.group(i[4])", "m.group("+str(iGroup)+")")
        for x in re.finditer("group[(](\d+)[)]", sAction):
            if int(x.group(1)) > nGroup:
                print("# Error in groups in replacement at line " + sIdAction + " ("+str(nGroup)+" groups only)")
    else:
        for x in re.finditer(r"\\(\d+)", sAction):
            if int(x.group(1)) > nGroup:
                print("# Error in groups in replacement at line " + sIdAction + " ("+str(nGroup)+" groups only)")
        if re.search("[.]\\w+[(]|sugg\\w+[(]", sAction):
            print("# Error in action at line " + sIdAction + ":  This action looks like code. Line should begin with =")


    if cAction == "-":
        ## error detected --> suggestion
        if not sAction:
            print("# Error in action at line " + sIdAction + ":  This action is empty.")
        if sAction[0:1] == "=":
            lFUNCTIONS.append(("s_"+sIdAction, sAction[1:]))
            sAction = "=s_"+sIdAction
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        if not sMsg:
            print("# Error in action at line " + sIdAction + ":  the message is empty.")
        return [sCondition, cAction, sAction, iGroup, sMsg, sURL]
    elif cAction == "~":
        ## text processor
        if not sAction:
            print("# Error in action at line " + sIdAction + ":  This action is empty.")
        if sAction[0:1] == "=":
            lFUNCTIONS.append(("p_"+sIdAction, sAction[1:]))
            sAction = "=p_"+sIdAction
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        return [sCondition, cAction, sAction, iGroup]
    elif cAction == "=":
        ## disambiguator
        if sAction[0:1] == "=":
            sAction = sAction[1:]
        if not sAction:
            print("# Error in action at line " + sIdAction + ":  This action is empty.")

        lFUNCTIONS.append(("d_"+sIdAction, sAction))
        sAction = "d_"+sIdAction
        return [sCondition, cAction, sAction]
    elif cAction == ">":
        ## no action, break loop if condition is False
        return [sCondition, cAction, ""]
    else:
        print("# Unknown action at line " + sIdAction)
        return None


def _calcRulesStats (lRules):

    d = {'=':0, '~': 0, '-': 0, '>': 0}
    for aRule in lRules:

        for aAction in aRule[6]:
            d[aAction[1]] = d[aAction[1]] + 1
    return (d, len(lRules))


def displayStats (lParagraphRules, lSentenceRules):

    print("  {:>18} {:>18} {:>18} {:>18}".format("DISAMBIGUATOR", "TEXT PROCESSOR", "GRAMMAR CHECKING", "REGEX"))
    d, nRule = _calcRulesStats(lParagraphRules)
    print("§ {:>10} actions {:>10} actions {:>10} actions  in {:>8} rules".format(d['='], d['~'], d['-'], nRule))
    d, nRule = _calcRulesStats(lSentenceRules)
    print("s {:>10} actions {:>10} actions {:>10} actions  in {:>8} rules".format(d['='], d['~'], d['-'], nRule))


................................................................................
            m = re.match("OPTGROUP/([a-z0-9]+):(.+)$", sLine)
            lStructOpt.append( (m.group(1), list(map(str.split, m.group(2).split(",")))) )
        elif sLine.startswith("OPTSOFTWARE:"):
            lOpt = [ [s, {}]  for s in sLine[12:].strip().split() ]  # don’t use tuples (s, {}), because unknown to JS
        elif sLine.startswith("OPT/"):
            m = re.match("OPT/([a-z0-9]+):(.+)$", sLine)
            for i, sOpt in enumerate(m.group(2).split()):
                lOpt[i][1][m.group(1)] =  eval(sOpt)
        elif sLine.startswith("OPTPRIORITY/"):
            m = re.match("OPTPRIORITY/([a-z0-9]+): *([0-9])$", sLine)
            dOptPriority[m.group(1)] = int(m.group(2))
        elif sLine.startswith("OPTLANG/"):
            m = re.match("OPTLANG/([a-z][a-z](?:_[A-Z][A-Z]|)):(.+)$", sLine)
            sLang = m.group(1)[:2]
            dOptLabel[sLang] = { "__optiontitle__": m.group(2).strip() }
................................................................................
    print("  options defined for: " + ", ".join([ t[0] for t in lOpt ]))
    dOptions = { "lStructOpt": lStructOpt, "dOptLabel": dOptLabel, "sDefaultUILang": sDefaultUILang }
    dOptions.update({ "dOpt"+k: v  for k, v in lOpt })
    return dOptions, dOptPriority


def printBookmark (nLevel, sComment, nLine):

    print("  {:>6}:  {}".format(nLine, "  " * nLevel + sComment))


def make (spLang, sLang, bJavaScript):
    "compile rules, returns a dictionary of values"
    # for clarity purpose, don’t create any file here

................................................................................
        lRules = open(spLang + "/rules.grx", 'r', encoding="utf-8").readlines()
    except:
        print("Error. Rules file in project [" + sLang + "] not found.")
        exit()

    # removing comments, zeroing empty lines, creating definitions, storing tests, merging rule lines
    print("  parsing rules...")
    global dDEF
    lLine = []
    lRuleLine = []
    lTest = []
    lOpt = []
    zBookmark = re.compile("^!!+")
    zGraphLink = re.compile(r"^@@@@GRAPHLINK>(\w+)@@@@")

    for i, sLine in enumerate(lRules, 1):
        if sLine.startswith('#END'):

            printBookmark(0, "BREAK BY #END", i)
            break
        elif sLine.startswith("#"):

            pass
        elif sLine.startswith("@@@@"):
            m = re.match(r"^@@@@GRAPHLINK>(\w+)@@@@", sLine.strip())
            if m:
                #lRuleLine.append(["@GRAPHLINK", m.group(1)])
                printBookmark(1, "@GRAPHLINK: " + m.group(1), i)
        elif sLine.startswith("DEF:"):

            m = re.match("DEF: +([a-zA-Z_][a-zA-Z_0-9]*) +(.+)$", sLine.strip())
            if m:
                dDEF["{"+m.group(1)+"}"] = m.group(2)
            else:
                print("Error in definition: ", end="")
                print(sLine.strip())
        elif sLine.startswith("TEST:"):

            lTest.append("{:<8}".format(i) + "  " + sLine[5:].strip())
        elif sLine.startswith("TODO:"):

            pass
        elif sLine.startswith(("OPTGROUP/", "OPTSOFTWARE:", "OPT/", "OPTLANG/", "OPTDEFAULTUILANG:", "OPTLABEL/", "OPTPRIORITY/")):

            lOpt.append(sLine)
        elif re.match("[  \t]*$", sLine):
            pass
        elif sLine.startswith("!!"):
            m = zBookmark.search(sLine)

            nExMk = len(m.group(0))
            if sLine[nExMk:].strip():
                printBookmark(nExMk-2, sLine[nExMk:-3].strip(), i)
        elif sLine.startswith(("    ", "\t")):

            lRuleLine[len(lRuleLine)-1][1] += " " + sLine.strip()
        else:

            lRuleLine.append([i, sLine.strip()])

    # generating options files
    print("  parsing options...")
    try:
        dOptions, dOptPriority = prepareOptions(lOpt)
    except:
................................................................................
                        lParagraphRules.append(aRule)
                        lParagraphRulesJS.append(jsconv.pyRuleToJS(aRule, dJSREGEXES, sWORDLIMITLEFT))
                    else:
                        lSentenceRules.append(aRule)
                        lSentenceRulesJS.append(jsconv.pyRuleToJS(aRule, dJSREGEXES, sWORDLIMITLEFT))

    # creating file with all functions callable by rules
    print("  creating callables...")
    sPyCallables = "# generated code, do not edit\n"
    sJSCallables = "// generated code, do not edit\nconst oEvalFunc = {\n"
    for sFuncName, sReturn in lFUNCTIONS:
        cType = sFuncName[0:1]
        if cType == "c": # condition
            sParams = "s, sx, m, dDA, sCountry, bCondMemo"
        elif cType == "m": # message
            sParams = "s, m"
        elif cType == "s": # suggestion
            sParams = "s, m"
        elif cType == "p": # preprocessor
            sParams = "s, m"
        elif cType == "d": # disambiguator
            sParams = "s, m, dDA"
        else:
            print("# Unknown function type in [" + sFuncName + "]")
            continue

        sPyCallables += "def {} ({}):\n".format(sFuncName, sParams)
        sPyCallables += "    return " + sReturn + "\n"

        sJSCallables += "    {}: function ({})".format(sFuncName, sParams) + " {\n"
        sJSCallables += "        return " + jsconv.py2js(sReturn) + ";\n"
        sJSCallables += "    },\n"
    sJSCallables += "}\n"

    displayStats(lParagraphRules, lSentenceRules)

    print("Unnamed rules: " + str(nRULEWITHOUTNAME))

    d = { "callables": sPyCallables,
          "callablesJS": sJSCallables,
          "gctests": sGCTests,
          "gctestsJS": sGCTestsJS,
          "paragraph_rules": mergeRulesByOption(lParagraphRules),
          "sentence_rules": mergeRulesByOption(lSentenceRules),
          "paragraph_rules_JS": jsconv.writeRulesToJSArray(mergeRulesByOption(lParagraphRulesJS)),
          "sentence_rules_JS": jsconv.writeRulesToJSArray(mergeRulesByOption(lSentenceRulesJS)) }
    d.update(dOptions)

    return d
"""
Grammalecte: compile rules
"""

import re
import traceback
import json

import compile_rules_js_convert as jsconv
import compile_rules_graph as crg


dDEF = {}
lFUNCTIONS = []

aRULESET = set()     # set of rule-ids to check if there is several rules with the same id
nRULEWITHOUTNAME = 0

dJSREGEXES = {}

sWORDLIMITLEFT  = r"(?<![\w.,–-])"   # r"(?<![-.,—])\b"  seems slower
sWORDLIMITRIGHT = r"(?![\w–-])"      # r"\b(?!-—)"       seems slower


def _rgb (r, g, b):
    return (r & 255) << 16 | (g & 255) << 8 | (b & 255)


def getRGB (sHex):
    if sHex:
        r = int(sHex[:2], 16)
        g = int(sHex[2:4], 16)
        b = int(sHex[4:], 16)
        return _rgb(r, g, b)
    return _rgb(0, 0, 0)


def prepareFunction (s):
    "convert simple rule syntax to a string of Python code"
    s = s.replace("__also__", "bCondMemo")
    s = s.replace("__else__", "not bCondMemo")
    s = re.sub(r"isStart *\(\)", 'before("^ *$|, *$")', s)
    s = re.sub(r"isRealStart *\(\)", 'before("^ *$")', s)
    s = re.sub(r"isStart0 *\(\)", 'before0("^ *$|, *$")', s)
    s = re.sub(r"isRealStart0 *\(\)", 'before0("^ *$")', s)
    s = re.sub(r"isEnd *\(\)", 'after("^ *$|^,")', s)
    s = re.sub(r"isRealEnd *\(\)", 'after("^ *$")', s)
    s = re.sub(r"isEnd0 *\(\)", 'after0("^ *$|^,")', s)
    s = re.sub(r"isRealEnd0 *\(\)", 'after0("^ *$")', s)
    s = re.sub(r"(select|exclude)[(][\\](\d+)", '\\1(dTokenPos, m.start(\\2), m.group(\\2)', s)
    s = re.sub(r"define[(][\\](\d+)", 'define(dTokenPos, m.start(\\1)', s)
    s = re.sub(r"(morph|morphex|displayInfo)[(][\\](\d+)", '\\1((m.start(\\2), m.group(\\2))', s)
    s = re.sub(r"(morph|morphex|displayInfo)[(]", '\\1(dTokenPos, ', s)
    s = re.sub(r"(sugg\w+|switch\w+)\(@", '\\1(m.group(i[4])', s)
    s = re.sub(r"word\(\s*1\b", 'nextword1(sSentence, m.end()', s)                                  # word(1)
    s = re.sub(r"word\(\s*-1\b", 'prevword1(sSentence, m.start()', s)                               # word(-1)
    s = re.sub(r"word\(\s*(\d)", 'nextword(sSentence, m.end(), \\1', s)                             # word(n)
    s = re.sub(r"word\(\s*-(\d)", 'prevword(sSentence, m.start(), \\1', s)                          # word(-n)
    s = re.sub(r"before\(\s*", 'look(sSentence[:m.start()], ', s)                                   # before(sSentence)
    s = re.sub(r"after\(\s*", 'look(sSentence[m.end():], ', s)                                      # after(sSentence)
    s = re.sub(r"textarea\(\s*", 'look(sSentence, ', s)                                             # textarea(sSentence)



    s = re.sub(r"/0", 'sSentence0[m.start():m.end()]', s)                                           # /0
    s = re.sub(r"before0\(\s*", 'look(sSentence0[:m.start()], ', s)                                 # before0(sSentence)
    s = re.sub(r"after0\(\s*", 'look(sSentence0[m.end():], ', s)                                    # after0(sSentence)
    s = re.sub(r"textarea0\(\s*", 'look(sSentence0, ', s)                                           # textarea0(sSentence)
    s = re.sub(r"\bspell *[(]", '_oSpellChecker.isValid(', s)
    s = re.sub(r"[\\](\d+)", 'm.group(\\1)', s)
    return s

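# Note (editor's sketch, not part of the checked-in file): the generated code now
# references dTokenPos and sSentence/sSentence0 where the previous version used
# dDA and s/sx. For instance:
#   prepareFunction('morph(\\1, ":V")')
#   now returns  'morph(dTokenPos, (m.start(1), m.group(1)), ":V")'
#   instead of   'morph(dDA, (m.start(1), m.group(1)), ":V")'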

def uppercase (s, sLang):
    "(flag i is not enough): converts regex to uppercase regex: 'foo' becomes '[Ff][Oo][Oo]', but 'Bar' becomes 'B[Aa][Rr]'."
................................................................................
            nState = 4
        elif nState == 4:
            nState = 0
    return sUp


def countGroupInRegex (sRegex):
    "returns the number of groups in <sRegex>"
    try:
        return re.compile(sRegex).groups
    except:
        traceback.print_exc()
        print(sRegex)
    return 0


def createRule (s, nIdLine, sLang, bParagraph, dOptPriority):
    "returns rule as list [option name, regex, bCaseInsensitive, identifier, list of actions]"
    global dJSREGEXES
    global nRULEWITHOUTNAME


    sLineId = str(nIdLine) + ("p" if bParagraph else "s")
    sRuleId = sLineId

    #### GRAPH CALL
    if s.startswith("@@@@"):
        if bParagraph:
            print("Error. Graph call can be made only after the first pass (sentence by sentence)")
            exit()
        return ["@@@@", s[4:], sLineId]

    #### OPTIONS
    sOption = False         # False or [a-z0-9]+ name
    nPriority = 4           # Default is 4, value must be between 0 and 9
    tGroups = None          # code for groups positioning (only useful for JavaScript)
    cCaseMode = 'i'         # i: case insensitive,  s: case sensitive,  u: uppercasing allowed
    cWordLimitLeft = '['    # [: word limit, <: no specific limit
    cWordLimitRight = ']'   # ]: word limit, >: no specific limit
    m = re.match("^__(?P<borders_and_case>[\\[<]\\w[\\]>])(?P<option>/[a-zA-Z0-9]+|)(?P<ruleid>\\(\\w+\\)|)(?P<priority>![0-9]|)__ *", s)
    if m:
        cWordLimitLeft = m.group('borders_and_case')[0]
        cCaseMode = m.group('borders_and_case')[1]
        cWordLimitRight = m.group('borders_and_case')[2]
        sOption = m.group('option')[1:]  if m.group('option')  else False
        if m.group('ruleid'):
            sRuleId =  m.group('ruleid')[1:-1]
................................................................................
        sRegex = sRegex.replace("(?i)", "")
        sRegex = uppercase(sRegex, sLang)
    else:
        print("# Unknown case mode [" + cCaseMode + "] at line " + sLineId)

    ## check regex
    try:
        re.compile(sRegex)
    except:
        print("# Regex error at line ", nIdLine)
        print(sRegex)
        traceback.print_exc()
        return None
    ## groups in non grouping parenthesis
    for x in re.finditer(r"\(\?:[^)]*\([\[\w -]", sRegex):
        print("# Warning: groups inside non grouping parenthesis in regex at line " + sLineId)

    #### PARSE ACTIONS
    lActions = []
    nAction = 1
    for sAction in s.split(" <<- "):
        t = createAction(sRuleId + "_" + str(nAction), sAction, nGroup)
................................................................................
        if t:
            lActions.append(t)
    if not lActions:
        return None

    return [sOption, sRegex, bCaseInsensitive, sLineId, sRuleId, nPriority, lActions, tGroups]


def checkReferenceNumbers (sText, sActionId, nToken):
    "check if token references in <sText> greater than <nToken> (debugging)"
    for x in re.finditer(r"\\(\d+)", sText):
        if int(x.group(1)) > nToken:
            print("# Error in token index at line " + sActionId + " ("+str(nToken)+" tokens only)")
            print(sText)


def checkIfThereIsCode (sText, sActionId):
    "check if there is code in <sText> (debugging)"
    if re.search("[.]\\w+[(]|sugg\\w+[(]|\\([0-9]|\\[[0-9]", sText):
        print("# Warning at line " + sActionId + ":  This message looks like code. Line should probably begin with =")
        print(sText)

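# Sketch (not part of the checked-in file): both helpers are diagnostics only;
# they print a warning and change nothing. For instance, with a rule regex
# that has only 2 groups:
#   checkReferenceNumbers("suggest \\3", "1234s_1", 2)
#   prints "# Error in token index at line 1234s_1 (2 tokens only)" followed by the text.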

def createAction (sIdAction, sAction, nGroup):
    "returns an action to perform as a tuple (condition, action type, action[, iGroup [, message, URL ]])"


    m = re.search(r"([-~=>])(\d*|)>>", sAction)
    if not m:
        print("# No action at line " + sIdAction)
        return None

    #### CONDITION
    sCondition = sAction[:m.start()].strip()
    if sCondition:
        sCondition = prepareFunction(sCondition)
        lFUNCTIONS.append(("_c_"+sIdAction, sCondition))
        checkReferenceNumbers(sCondition, sIdAction, nGroup)


        if ".match" in sCondition:
            print("# Error. JS compatibility. Don't use .match() in condition, use .search()")
        sCondition = "_c_"+sIdAction
    else:
        sCondition = None

    #### iGroup / positioning
    iGroup = int(m.group(2)) if m.group(2) else 0
    if iGroup > nGroup:
        print("# Selected group > group number in regex at line " + sIdAction)
................................................................................
            sMsg = sAction[iMsg+3:].strip()
            sAction = sAction[:iMsg].strip()
            sURL = ""
            mURL = re.search("[|] *(https?://.*)", sMsg)
            if mURL:
                sURL = mURL.group(1).strip()
                sMsg = sMsg[:mURL.start(0)].strip()
            checkReferenceNumbers(sMsg, sIdAction, nGroup)
            if sMsg[0:1] == "=":
                sMsg = prepareFunction(sMsg[1:])
                lFUNCTIONS.append(("_m_"+sIdAction, sMsg))



                sMsg = "=_m_"+sIdAction
            else:
                checkIfThereIsCode(sMsg, sIdAction)

    checkReferenceNumbers(sAction, sIdAction, nGroup)
    if sAction[0:1] == "=" or cAction == "=":


        sAction = prepareFunction(sAction)
        sAction = sAction.replace("m.group(i[4])", "m.group("+str(iGroup)+")")



    else:




        checkIfThereIsCode(sAction, sIdAction)

    if cAction == ">":
        ## no action, break loop if condition is False
        return [sCondition, cAction, ""]

    if not sAction:
        print("# Error in action at line " + sIdAction + ":  This action is empty.")
        return None

    if cAction == "-":
        ## error detected --> suggestion


        if sAction[0:1] == "=":
            lFUNCTIONS.append(("_s_"+sIdAction, sAction[1:]))
            sAction = "=_s_"+sIdAction
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        if not sMsg:
            print("# Error in action at line " + sIdAction + ":  the message is empty.")
        return [sCondition, cAction, sAction, iGroup, sMsg, sURL]
    elif cAction == "~":
        ## text processor


        if sAction[0:1] == "=":
            lFUNCTIONS.append(("_p_"+sIdAction, sAction[1:]))
            sAction = "=_p_"+sIdAction
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        return [sCondition, cAction, sAction, iGroup]
    elif cAction == "=":
        ## disambiguator
        if sAction[0:1] == "=":
            sAction = sAction[1:]
        if "define" in sAction and not re.search(r"define\(dTokenPos, *m\.start.*, \[.*\] *\)", sAction):
            print("# Error in action at line " + sIdAction + ": second argument for define must be a list of strings")
            print(sAction)
        lFUNCTIONS.append(("_d_"+sIdAction, sAction))
        sAction = "_d_"+sIdAction
        return [sCondition, cAction, sAction]



    else:
        print("# Unknown action at line " + sIdAction)
        return None


def _calcRulesStats (lRules):
    "count rules and actions"
    d = {'=':0, '~': 0, '-': 0, '>': 0}
    for aRule in lRules:
        if aRule[0] != "@@@@":
            for aAction in aRule[6]:
                d[aAction[1]] = d[aAction[1]] + 1
    return (d, len(lRules))


def displayStats (lParagraphRules, lSentenceRules):
    "display rules numbers"
    print("  {:>18} {:>18} {:>18} {:>18}".format("DISAMBIGUATOR", "TEXT PROCESSOR", "GRAMMAR CHECKING", "REGEX"))
    d, nRule = _calcRulesStats(lParagraphRules)
    print("§ {:>10} actions {:>10} actions {:>10} actions  in {:>8} rules".format(d['='], d['~'], d['-'], nRule))
    d, nRule = _calcRulesStats(lSentenceRules)
    print("s {:>10} actions {:>10} actions {:>10} actions  in {:>8} rules".format(d['='], d['~'], d['-'], nRule))


................................................................................
            m = re.match("OPTGROUP/([a-z0-9]+):(.+)$", sLine)
            lStructOpt.append( (m.group(1), list(map(str.split, m.group(2).split(",")))) )
        elif sLine.startswith("OPTSOFTWARE:"):
            lOpt = [ [s, {}]  for s in sLine[12:].strip().split() ]  # don’t use tuples (s, {}), because unknown to JS
        elif sLine.startswith("OPT/"):
            m = re.match("OPT/([a-z0-9]+):(.+)$", sLine)
            for i, sOpt in enumerate(m.group(2).split()):
                lOpt[i][1][m.group(1)] = eval(sOpt)
        elif sLine.startswith("OPTPRIORITY/"):
            m = re.match("OPTPRIORITY/([a-z0-9]+): *([0-9])$", sLine)
            dOptPriority[m.group(1)] = int(m.group(2))
        elif sLine.startswith("OPTLANG/"):
            m = re.match("OPTLANG/([a-z][a-z](?:_[A-Z][A-Z]|)):(.+)$", sLine)
            sLang = m.group(1)[:2]
            dOptLabel[sLang] = { "__optiontitle__": m.group(2).strip() }
................................................................................
    print("  options defined for: " + ", ".join([ t[0] for t in lOpt ]))
    dOptions = { "lStructOpt": lStructOpt, "dOptLabel": dOptLabel, "sDefaultUILang": sDefaultUILang }
    dOptions.update({ "dOpt"+k: v  for k, v in lOpt })
    return dOptions, dOptPriority


def printBookmark (nLevel, sComment, nLine):
    "print bookmark within the rules file"
    print("  {:>6}:  {}".format(nLine, "  " * nLevel + sComment))


def make (spLang, sLang, bJavaScript):
    "compile rules, returns a dictionary of values"
    # for clarity purpose, don’t create any file here

................................................................................
        lRules = open(spLang + "/rules.grx", 'r', encoding="utf-8").readlines()
    except:
        print("Error. Rules file in project [" + sLang + "] not found.")
        exit()

    # removing comments, zeroing empty lines, creating definitions, storing tests, merging rule lines
    print("  parsing rules...")


    lRuleLine = []
    lTest = []
    lOpt = []
    bGraph = False
    lGraphRule = []

    for i, sLine in enumerate(lRules, 1):
        if sLine.startswith('#END'):
            # arbitrary end
            printBookmark(0, "BREAK BY #END", i)
            break
        elif sLine.startswith("#"):
            # comment
            pass
        elif sLine.startswith("DEF:"):
            # definition
            m = re.match("DEF: +([a-zA-Z_][a-zA-Z_0-9]*) +(.+)$", sLine.strip())
            if m:
                dDEF["{"+m.group(1)+"}"] = m.group(2)
            else:
                print("Error in definition: ", end="")
                print(sLine.strip())
        elif sLine.startswith("TEST:"):
            # test
            lTest.append("{:<8}".format(i) + "  " + sLine[5:].strip())
        elif sLine.startswith("TODO:"):
            # todo
            pass
        elif sLine.startswith(("OPTGROUP/", "OPTSOFTWARE:", "OPT/", "OPTLANG/", "OPTDEFAULTUILANG:", "OPTLABEL/", "OPTPRIORITY/")):
            # options
            lOpt.append(sLine)


        elif sLine.startswith("!!"):
            # bookmark
            m = re.match("!!+", sLine)
            nExMk = len(m.group(0))
            if sLine[nExMk:].strip():
                printBookmark(nExMk-2, sLine[nExMk:-3].strip(), i)
        # Graph rules
        elif sLine.startswith("@@@@GRAPH:"):
            # rules graph call
            m = re.match(r"@@@@GRAPH: *(\w+)", sLine.strip())
            if m:
                printBookmark(0, "GRAPH: " + m.group(1), i)
                lRuleLine.append([i, "@@@@"+m.group(1)])
                bGraph = True
            lGraphRule.append([i, sLine])
            bGraph = True
        elif sLine.startswith("@@@@END_GRAPH"):
            #lGraphRule.append([i, sLine])
            printBookmark(0, "ENDGRAPH", i)
            bGraph = False
        elif re.match("@@@@ *$", sLine):
            pass
        elif bGraph:
            lGraphRule.append([i, sLine])
        # Regex rules
        elif re.match("[  \t]*$", sLine):
            # empty line
            pass
        elif sLine.startswith(("    ", "\t")):
            # rule (continuation)
            lRuleLine[-1][1] += " " + sLine.strip()
        else:
            # new rule
            lRuleLine.append([i, sLine.strip()])

    # generating options files
    print("  parsing options...")
    try:
        dOptions, dOptPriority = prepareOptions(lOpt)
    except:
................................................................................
                        lParagraphRules.append(aRule)
                        lParagraphRulesJS.append(jsconv.pyRuleToJS(aRule, dJSREGEXES, sWORDLIMITLEFT))
                    else:
                        lSentenceRules.append(aRule)
                        lSentenceRulesJS.append(jsconv.pyRuleToJS(aRule, dJSREGEXES, sWORDLIMITLEFT))

    # creating file with all functions callable by rules
    print("  creating callables for regex rules...")
    sPyCallables = ""
    sJSCallables = ""
    for sFuncName, sReturn in lFUNCTIONS:
        if sFuncName.startswith("_c_"): # condition
            sParams = "sSentence, sSentence0, m, dTokenPos, sCountry, bCondMemo"
        elif sFuncName.startswith("_m_"): # message

            sParams = "sSentence, m"
        elif sFuncName.startswith("_s_"): # suggestion
            sParams = "sSentence, m"
        elif sFuncName.startswith("_p_"): # preprocessor
            sParams = "sSentence, m"
        elif sFuncName.startswith("_d_"): # disambiguator
            sParams = "sSentence, m, dTokenPos"
        else:
            print("# Unknown function type in [" + sFuncName + "]")
            continue
        # Python
        sPyCallables += "def {} ({}):\n".format(sFuncName, sParams)
        sPyCallables += "    return " + sReturn + "\n"
        # JavaScript
        sJSCallables += "    {}: function ({})".format(sFuncName, sParams) + " {\n"
        sJSCallables += "        return " + jsconv.py2js(sReturn) + ";\n"
        sJSCallables += "    },\n"


    displayStats(lParagraphRules, lSentenceRules)

    print("Unnamed rules: " + str(nRULEWITHOUTNAME))

    dVars = {   "callables": sPyCallables,
                "callablesJS": sJSCallables,
                "gctests": sGCTests,
                "gctestsJS": sGCTestsJS,
                "paragraph_rules": mergeRulesByOption(lParagraphRules),
                "sentence_rules": mergeRulesByOption(lSentenceRules),
                "paragraph_rules_JS": jsconv.writeRulesToJSArray(mergeRulesByOption(lParagraphRulesJS)),
                "sentence_rules_JS": jsconv.writeRulesToJSArray(mergeRulesByOption(lSentenceRulesJS)) }
    dVars.update(dOptions)

    # compile graph rules
    dVars2 = crg.make(lGraphRule, dDEF, sLang, dOptPriority, bJavaScript)
    dVars.update(dVars2)

    return dVars
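
A minimal usage sketch (not part of this check-in; the language folder and arguments are illustrative assumptions): make() now returns a single dictionary that combines the regex-rule artifacts with the graph artifacts produced by compile_rules_graph.make().

import compile_rules

# "gc_lang/fr" and "fr" are hypothetical arguments; any language folder containing a rules.grx works
dVars = compile_rules.make("gc_lang/fr", "fr", False)
print(sorted(dVars.keys()))   # regex-rule entries ("callables", "paragraph_rules", ...)
                              # plus whatever crg.make() added for the token graphs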

Added compile_rules_graph.py version [f9b11563d7].
"""
Grammalecte: compile rules
Create a Direct Acyclic Rule Graphs (DARGs)
"""

import re
import traceback
import json

import darg
import compile_rules_js_convert as jsconv


dACTIONS = {}
dFUNCTIONS = {}
dFUNCNAME = {}


def createFunction (sType, sActionId, sCode, bStartWithEqual=False):
    "create a function (stored in <dFUNCTIONS>) and return function name"
    sCode = prepareFunction(sCode)
    if sType not in dFUNCNAME:
        dFUNCNAME[sType] = {}
    if sCode not in dFUNCNAME[sType]:
        dFUNCNAME[sType][sCode] = len(dFUNCNAME[sType])+1
    sFuncName = "_g_" + sType + "_" + str(dFUNCNAME[sType][sCode])
    dFUNCTIONS[sFuncName] = sCode
    return sFuncName  if not bStartWithEqual  else "="+sFuncName


def storeAction (sActionId, aAction):
    "store <aAction> in <dACTIONS> avoiding duplicates"
    nVar = 0
    while True:
        sActionName = sActionId + "_" + str(nVar)
        if sActionName not in dACTIONS:
            dACTIONS[sActionName] = aAction
            return sActionName
        elif aAction == dACTIONS[sActionName]:
            return sActionName
        nVar += 1

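# Sketch (not part of the added file): createFunction() deduplicates by code text,
# storeAction() deduplicates by content and numbers the variants. Assuming this is
# the first condition registered:
#   createFunction("cond", "my_rule__b1_a1", 'morph(\\1, ":V")')   # sActionId is not used in the name
#   -> "_g_cond_1"
#   storeAction("my_rule__b1_a1", [False, "", ">", ""])
#   -> "my_rule__b1_a1_0"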

def prepareFunction (sCode):
    "convert simple rule syntax to a string of Python code"
    if sCode[0:1] == "=":
        sCode = sCode[1:]
    sCode = sCode.replace("__also__", "bCondMemo")
    sCode = sCode.replace("__else__", "not bCondMemo")
    sCode = sCode.replace("sContext", "_sAppContext")
    sCode = re.sub(r"(morph|morphVC|analyse|value|tag|displayInfo)[(]\\(\d+)", 'g_\\1(lToken[nTokenOffset+\\2]', sCode)
    sCode = re.sub(r"(morph|morphVC|analyse|value|tag|displayInfo)[(]\\-(\d+)", 'g_\\1(lToken[nLastToken-\\2+1]', sCode)
    sCode = re.sub(r"(select|exclude|define|define_from)[(][\\](\d+)", 'g_\\1(lToken[nTokenOffset+\\2]', sCode)
    sCode = re.sub(r"(select|exclude|define|define_from)[(][\\]-(\d+)", 'g_\\1(lToken[nLastToken-\\2+1]', sCode)
    sCode = re.sub(r"(tag_before|tag_after)[(][\\](\d+)", 'g_\\1(lToken[nTokenOffset+\\2], dTags', sCode)
    sCode = re.sub(r"(tag_before|tag_after)[(][\\]-(\d+)", 'g_\\1(lToken[nLastToken-\\2+1], dTags', sCode)
    sCode = re.sub(r"space_after[(][\\](\d+)", 'g_space_between_tokens(lToken[nTokenOffset+\\1], lToken[nTokenOffset+\\1+1]', sCode)
    sCode = re.sub(r"space_after[(][\\]-(\d+)", 'g_space_between_tokens(lToken[nLastToken-\\1+1], lToken[nLastToken-\\1+2]', sCode)
    sCode = re.sub(r"analyse_with_next[(][\\](\d+)", 'g_merged_analyse(lToken[nTokenOffset+\\1], lToken[nTokenOffset+\\1+1]', sCode)
    sCode = re.sub(r"analyse_with_next[(][\\]-(\d+)", 'g_merged_analyse(lToken[nLastToken-\\1+1], lToken[nLastToken-\\1+2]', sCode)
    sCode = re.sub(r"(morph|analyse|tag|value)\(>1", 'g_\\1(lToken[nLastToken+1]', sCode)                       # next token
    sCode = re.sub(r"(morph|analyse|tag|value)\(<1", 'g_\\1(lToken[nTokenOffset]', sCode)                       # previous token
    sCode = re.sub(r"(morph|analyse|tag|value)\(>(\d+)", 'g_\\1(g_token(lToken, nLastToken+\\2)', sCode)        # next token
    sCode = re.sub(r"(morph|analyse|tag|value)\(<(\d+)", 'g_\\1(g_token(lToken, nTokenOffset+1-\\2)', sCode)    # previous token
    sCode = re.sub(r"\bspell *[(]", '_oSpellChecker.isValid(', sCode)
    sCode = re.sub(r"\bbefore\(\s*", 'look(sSentence[:lToken[1+nTokenOffset]["nStart"]], ', sCode)          # before(sCode)
    sCode = re.sub(r"\bafter\(\s*", 'look(sSentence[lToken[nLastToken]["nEnd"]:], ', sCode)                 # after(sCode)
    sCode = re.sub(r"\bbefore0\(\s*", 'look(sSentence0[:lToken[1+nTokenOffset]["nStart"]], ', sCode)        # before0(sCode)
    sCode = re.sub(r"\bafter0\(\s*", 'look(sSentence[lToken[nLastToken]["nEnd"]:], ', sCode)                # after0(sCode)
    sCode = re.sub(r"analyseWord[(]", 'analyse(', sCode)
    sCode = re.sub(r"[\\](\d+)", 'lToken[nTokenOffset+\\1]["sValue"]', sCode)
    sCode = re.sub(r"[\\]-(\d+)", 'lToken[nLastToken-\\1+1]["sValue"]', sCode)
    return sCode

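# Sketch (not part of the added file): token references are rewritten against
# lToken, counted from nTokenOffset (positive refs) or nLastToken (negative refs):
#   prepareFunction('morph(\\2, ":V")')   -> 'g_morph(lToken[nTokenOffset+2], ":V")'
#   prepareFunction('morph(\\-1, ":V")')  -> 'g_morph(lToken[nLastToken-1+1], ":V")'
#   prepareFunction('\\1')                -> 'lToken[nTokenOffset+1]["sValue"]'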

def genTokenLines (sTokenLine, dDef):
    "tokenize a string and return a list of lines of tokens"
    lToken = sTokenLine.split()
    lTokenLines = None
    for sToken in lToken:
        # optional token?
        bNullPossible = sToken.startswith("?") and sToken.endswith("¿")
        if bNullPossible:
            sToken = sToken[1:-1]
        # token with definition?
        if sToken.startswith("({") and sToken.endswith("})") and sToken[1:-1] in dDef:
            sToken = "(" + dDef[sToken[1:-1]] + ")"
        elif sToken.startswith("{") and sToken.endswith("}") and sToken in dDef:
            sToken = dDef[sToken]
        if ( (sToken.startswith("[") and sToken.endswith("]")) or (sToken.startswith("([") and sToken.endswith("])")) ):
            # multiple token
            bSelectedGroup = sToken.startswith("(") and sToken.endswith(")")
            if bSelectedGroup:
                sToken = sToken[1:-1]
            lNewToken = sToken[1:-1].split("|")
            if not lTokenLines:
                lTokenLines = [ ["("+s+")"]  for s  in lNewToken ]  if bSelectedGroup  else [ [s]  for s  in lNewToken ]
                if bNullPossible:
                    lTokenLines.extend([ []  for i  in range(len(lNewToken)+1) ])
            else:
                lNewTemp = []
                if bNullPossible:
                    for aRule in lTokenLines:
                        for sElem in lNewToken:
                            aNewRule = list(aRule)
                            aNewRule.append(sElem)
                            lNewTemp.append(aNewRule)
                else:
                    sElem1 = lNewToken.pop(0)
                    for aRule in lTokenLines:
                        for sElem in lNewToken:
                            aNewRule = list(aRule)
                            aNewRule.append("(" + sElem + ")"  if bSelectedGroup  else sElem)
                            lNewTemp.append(aNewRule)
                        aRule.append("(" + sElem1 + ")"  if bSelectedGroup  else sElem1)
                lTokenLines.extend(lNewTemp)
        else:
            # simple token
            if not lTokenLines:
                lTokenLines = [[sToken], []]  if bNullPossible  else [[sToken]]
            else:
                if bNullPossible:
                    lNewTemp = []
                    for aRule in lTokenLines:
                        lNew = list(aRule)
                        lNew.append(sToken)
                        lNewTemp.append(lNew)
                    lTokenLines.extend(lNewTemp)
                else:
                    for aRule in lTokenLines:
                        aRule.append(sToken)
    for aRule in lTokenLines:
        yield aRule

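# Sketch (not part of the added file): genTokenLines() expands multiple tokens
# ([a|b]) and optional tokens (?x¿) into every concrete token sequence, e.g.
#   list(genTokenLines("le [chat|chien] ?noir¿", {}))
#   -> [["le", "chat"], ["le", "chien"], ["le", "chat", "noir"], ["le", "chien", "noir"]]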

def createRule (iLine, sRuleName, sTokenLine, iActionBlock, sActions, nPriority, dOptPriority, dDef):
    "generator: create rule as list"
    # print(iLine, "//", sRuleName, "//", sTokenLine, "//", sActions, "//", nPriority)
    for lToken in genTokenLines(sTokenLine, dDef):
        # Calculate positions
        dPos = {}   # key: iGroup, value: iToken
        iGroup = 0
        #if iLine == 3971: # debug
        #    print(" ".join(lToken))
        for i, sToken in enumerate(lToken):
            if sToken.startswith("(") and sToken.endswith(")"):
                lToken[i] = sToken[1:-1]
                iGroup += 1
                dPos[iGroup] = i + 1    # we add 1, for we count tokens from 1 to n (not from 0)

        # Parse actions
        for iAction, sAction in enumerate(sActions.split(" <<- ")):
            sAction = sAction.strip()
            if sAction:
                sActionId = sRuleName + "__b" + str(iActionBlock) + "_a" + str(iAction)
                aAction = createAction(sActionId, sAction, nPriority, dOptPriority, len(lToken), dPos)
                if aAction:
                    sActionName = storeAction(sActionId, aAction)
                    lResult = list(lToken)
                    lResult.extend(["##"+str(iLine), sActionName])
                    #if iLine == 13341:
                    #    print("  ".join(lToken))
                    #    print(sActionId, aAction)
                    yield lResult
                else:
                    print(" # Error on action at line:", iLine)
                    print(sTokenLine, "\n", sActions)


def changeReferenceToken (sText, dPos):
    "change group reference in <sText> with values in <dPos>"
    for i in range(len(dPos), 0, -1):
        sText = sText.replace("\\"+str(i), "\\"+str(dPos[i]))
    return sText


def checkTokenNumbers (sText, sActionId, nToken):
    "check if token references in <sText> greater than <nToken> (debugging)"
    for x in re.finditer(r"\\(\d+)", sText):
        if int(x.group(1)) > nToken:
            print("# Error in token index at line " + sActionId + " ("+str(nToken)+" tokens only)")
            print(sText)


def checkIfThereIsCode (sText, sActionId):
    "check if there is code in <sText> (debugging)"
    if re.search("[.]\\w+[(]|sugg\\w+[(]|\\([0-9]|\\[[0-9]", sText):
        print("# Warning at line " + sActionId + ":  This message looks like code. Line should probably begin with =")
        print(sText)


def createAction (sActionId, sAction, nPriority, dOptPriority, nToken, dPos):
    "create action rule as a list"
    # Option
    sOption = False
    m = re.match("/(\\w+)/", sAction)
    if m:
        sOption = m.group(1)
        sAction = sAction[m.end():].strip()
    if nPriority == -1:
        nPriority = dOptPriority.get(sOption, 4)

    # valid action?
    m = re.search(r"(?P<action>[-~=/%>])(?P<start>-?\d+\.?|)(?P<end>:\.?-?\d+|)(?P<casing>:|)>>", sAction)
    if not m:
        print(" # Error. No action found at: ", sActionId)
        return None

    # Condition
    sCondition = sAction[:m.start()].strip()
    if sCondition:
        sCondition = changeReferenceToken(sCondition, dPos)
        sCondition = createFunction("cond", sActionId, sCondition)
    else:
        sCondition = ""

    # Case sensitivity
    bCaseSensitivity = False if m.group("casing") == ":" else True

    # Action
    cAction = m.group("action")
    sAction = sAction[m.end():].strip()
    sAction = changeReferenceToken(sAction, dPos)
    # target
    cStartLimit = "<"
    cEndLimit = ">"
    if not m.group("start"):
        iStartAction = 1
        iEndAction = 0
    else:
        if cAction != "-" and (m.group("start").endswith(".") or m.group("end").startswith(":.")):
            print(" # Error. Wrong selection on tokens.", sActionId)
            return None
        if m.group("start").endswith("."):
            cStartLimit = ">"
        iStartAction = int(m.group("start").rstrip("."))
        if not m.group("end"):
            iEndAction = iStartAction
        else:
            if m.group("end").startswith(":."):
                cEndLimit = "<"
            iEndAction = int(m.group("end").lstrip(":."))
    if dPos and m.group("start"):
        try:
            iStartAction = dPos.get(iStartAction, iStartAction)
            if iEndAction:
                iEndAction = dPos.get(iEndAction, iEndAction)
        except:
            print("# Error. Wrong groups in: " + sActionId)
            print("  iStartAction:", iStartAction, "iEndAction:", iEndAction)
            print(" ", dPos)
    if iStartAction < 0:
        iStartAction += 1
    if iEndAction < 0:
        iEndAction += 1

    if cAction == "-":
        ## error
        iMsg = sAction.find(" # ")
        if iMsg == -1:
            sMsg = "# Error. Error message not found."
            sURL = ""
            print(sMsg + " Action id: " + sActionId)
        else:
            sMsg = sAction[iMsg+3:].strip()
            sAction = sAction[:iMsg].strip()
            sURL = ""
            mURL = re.search("[|] *(https?://.*)", sMsg)
            if mURL:
                sURL = mURL.group(1).strip()
                sMsg = sMsg[:mURL.start(0)].strip()
            checkTokenNumbers(sMsg, sActionId, nToken)
            if sMsg[0:1] == "=":
                sMsg = createFunction("msg", sActionId, sMsg, True)
            else:
                checkIfThereIsCode(sMsg, sActionId)

    # checking consistancy
    checkTokenNumbers(sAction, sActionId, nToken)

    if cAction == ">":
        ## no action, break loop if condition is False
        return [sOption, sCondition, cAction, ""]

    if not sAction and cAction != "%":
        print("# Error in action at line " + sActionId + ":  This action is empty.")

    if sAction[0:1] != "=" and cAction != "=":
        checkIfThereIsCode(sAction, sActionId)

    if cAction == "-":
        ## error detected --> suggestion
        if sAction[0:1] == "=":
            sAction = createFunction("sugg", sActionId, sAction, True)
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        if not sMsg:
            print("# Error in action at line " + sActionId + ":  The message is empty.")
        return [sOption, sCondition, cAction, sAction, iStartAction, iEndAction, cStartLimit, cEndLimit, bCaseSensitivity, nPriority, sMsg, sURL]
    elif cAction == "~":
        ## text processor
        if sAction[0:1] == "=":
            sAction = createFunction("tp", sActionId, sAction, True)
        elif sAction.startswith('"') and sAction.endswith('"'):
            sAction = sAction[1:-1]
        return [sOption, sCondition, cAction, sAction, iStartAction, iEndAction, bCaseSensitivity]
    elif cAction == "%" or cAction == "/":
        ## tags
        return [sOption, sCondition, cAction, sAction, iStartAction, iEndAction]
    elif cAction == "=":
        ## disambiguator
        if "define(" in sAction and not re.search(r"define\(\\-?\d+ *, *\[.*\] *\)", sAction):
            print("# Error in action at line " + sActionId + ": second argument for <define> must be a list of strings")
        sAction = createFunction("da", sActionId, sAction)
        return [sOption, sCondition, cAction, sAction]
    else:
        print(" # Unknown action.", sActionId)
        return None


def make (lRule, dDef, sLang, dOptPriority, bJavaScript):
    "compile rules, returns a dictionary of values"
    # for clarity’s sake, don’t create any file here

    # removing comments, zeroing empty lines, creating definitions, storing tests, merging rule lines
    print("  parsing rules...")
    lTokenLine = []
    sActions = ""
    nPriority = -1
    dAllGraph = {}
    sGraphName = ""
    iActionBlock = 0
    aRuleName = set()

    for i, sLine in lRule:
        sLine = sLine.rstrip()
        if "\t" in sLine:
            # tabulation not allowed
            print("Error. Tabulation at line: ", i)
            exit()
        elif sLine.startswith("@@@@GRAPH: "):
            # rules graph call
            m = re.match(r"@@@@GRAPH: *(\w+)", sLine.strip())
            if m:
                sGraphName = m.group(1)
                if sGraphName in dAllGraph:
                    print("Error at line " + i + ". Graph name <" + sGraphName + "> already exists.")
                    exit()
                dAllGraph[sGraphName] = []
            else:
                print("Error. Graph name not found at line", i)
                exit()
        elif sLine.startswith("__") and sLine.endswith("__"):
            # new rule group
            m = re.match("__(\\w+)(!\\d|)__", sLine)
            if m:
                sRuleName = m.group(1)
                if sRuleName in aRuleName:
                    print("Error at line " + i + ". Rule name <" + sRuleName + "> already exists.")
                    exit()
                iActionBlock = 1
                nPriority = int(m.group(2)[1:]) if m.group(2)  else -1
            else:
                print("Syntax error in rule group: ", sLine, " -- line:", i)
                exit()
        elif re.search("^    +<<- ", sLine) or (sLine.startswith("        ") and not sLine.startswith("        ||")) \
                or re.search("^    +#", sLine) or re.search(r"[-~=>/%](?:-?\d\.?(?::\.?-?\d+|)|)>> ", sLine) :
            # actions
            sActions += " " + sLine.strip()
        elif re.match("[  ]*$", sLine):
            # empty line to end merging
            if not lTokenLine:
                continue
            if not sActions:
                print("Error. No action found at line:", i)
                exit()
            if not sGraphName:
                print("Error. All rules must belong to a named graph. Line: ", i)
                exit()
            for j, sTokenLine in lTokenLine:
                dAllGraph[sGraphName].append((j, sRuleName, sTokenLine, iActionBlock, sActions, nPriority))
            lTokenLine.clear()
            sActions = ""
            iActionBlock += 1
        elif sLine.startswith("    "):
            # tokens
            sLine = sLine.strip()
            if sLine.startswith("||"):
                iPrevLine, sPrevLine = lTokenLine[-1]
                lTokenLine[-1] = [iPrevLine, sPrevLine + " " + sLine[2:]]
            else:
                lTokenLine.append([i, sLine])
        else:
            print("Unknown line:")
            print(sLine)

    # processing rules
    print("  preparing rules...")
    for sGraphName, lRuleLine in dAllGraph.items():
        print("{:>8,} rules in {:<24} ".format(len(lRuleLine), "<"+sGraphName+">"), end="")
        lPreparedRule = []
        for i, sRuleGroup, sTokenLine, iActionBlock, sActions, nPriority in lRuleLine:
            for lRule in createRule(i, sRuleGroup, sTokenLine, iActionBlock, sActions, nPriority, dOptPriority, dDef):
                lPreparedRule.append(lRule)
        # Graph creation
        oDARG = darg.DARG(lPreparedRule, sLang)
        dAllGraph[sGraphName] = oDARG.createGraph()
        # Debugging
        if False:
            print("\nRULES:")
            for e in lPreparedRule:
                if e[-2] == "##2211":
                    print(e)
        if False:
            print("\nGRAPH:", sGraphName)
            for k, v in dAllGraph[sGraphName].items():
                print(k, "\t", v)

    # creating file with all functions callable by rules
    print("  creating callables for graph rules...")
    sPyCallables = ""
    sJSCallables = ""
    for sFuncName, sReturn in dFUNCTIONS.items():
        if sFuncName.startswith("_g_cond_"): # condition
            sParams = "lToken, nTokenOffset, nLastToken, sCountry, bCondMemo, dTags, sSentence, sSentence0"
        elif sFuncName.startswith("g_msg_"): # message
            sParams = "lToken, nTokenOffset, nLastToken"
        elif sFuncName.startswith("_g_sugg_"): # suggestion
            sParams = "lToken, nTokenOffset, nLastToken"
        elif sFuncName.startswith("_g_tp_"): # text preprocessor
            sParams = "lToken, nTokenOffset, nLastToken"
        elif sFuncName.startswith("_g_da_"): # disambiguator
            sParams = "lToken, nTokenOffset, nLastToken"
        else:
            print("# Unknown function type in [" + sFuncName + "]")
            continue
        # Python
        sPyCallables += "def {} ({}):\n".format(sFuncName, sParams)
        sPyCallables += "    return " + sReturn + "\n"
        # JavaScript
        sJSCallables += "    {}: function ({})".format(sFuncName, sParams) + " {\n"
        sJSCallables += "        return " + jsconv.py2js(sReturn) + ";\n"
        sJSCallables += "    },\n"

    # Debugging
    if False:
        print("\nActions:")
        for sActionName, aAction in dACTIONS.items():
            print(sActionName, aAction)
        print("\nFunctions:")
        print(sPyCallables)

    # Result
    return {
        "graph_callables": sPyCallables,
        "graph_callablesJS": sJSCallables,
        "rules_graphs": dAllGraph,
        "rules_graphsJS": str(dAllGraph).replace("True", "true").replace("False", "false"),
        "rules_actions": dACTIONS,
        "rules_actionsJS": str(dACTIONS).replace("True", "true").replace("False", "false")
    }

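For reference, a small sketch (not part of this check-in) of how the selector regex used in createAction() decomposes the head of an action string; the sample action heads below are made up for illustration only.

import re

# same pattern as in createAction() above
zSelector = re.compile(r"(?P<action>[-~=/%>])(?P<start>-?\d+\.?|)(?P<end>:\.?-?\d+|)(?P<casing>:|)>>")

# hypothetical action heads, for illustration
for sAction in ("-1:2>> suggestion", "~3>> *", "=>> =define(\\2, [\":V\"])"):
    m = zSelector.search(sAction)
    print(repr(sAction), "->", m.group("action"), repr(m.group("start")), repr(m.group("end")))
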
Modified compile_rules_js_convert.py from [5ad87f3f46] to [3755ac925a].


# Convert Python code to JavaScript code


import copy
import re
import json


def py2js (sCode):
................................................................................
    sCode = sCode.replace(" r'", " '")
    sCode = sCode.replace(',r"', ',"')
    sCode = sCode.replace(",r'", ",'")
    # operators
    sCode = sCode.replace(" and ", " && ")
    sCode = sCode.replace(" or ", " || ")
    sCode = re.sub("\\bnot\\b", "!", sCode)
    sCode = re.sub("(.+) if (.+) else (.+)", "(\\2) ? \\1 : \\3", sCode)
    # boolean
    sCode = sCode.replace("False", "false")
    sCode = sCode.replace("True", "true")

    sCode = sCode.replace("bool", "Boolean")
    # methods
    sCode = sCode.replace(".__len__()", ".length")
    sCode = sCode.replace(".endswith", ".endsWith")
    sCode = sCode.replace(".find", ".indexOf")
    sCode = sCode.replace(".startswith", ".startsWith")
    sCode = sCode.replace(".lower", ".toLowerCase")
................................................................................
    sCode = sCode.replace(".strip", ".gl_trim")
    sCode = sCode.replace(".lstrip", ".gl_trimLeft")
    sCode = sCode.replace(".rstrip", ".gl_trimRight")
    sCode = sCode.replace('.replace("."', r".replace(/\./g")
    sCode = sCode.replace('.replace("..."', r".replace(/\.\.\./g")
    sCode = re.sub(r'.replace\("([^"]+)" ?,', ".replace(/\\1/g,", sCode)
    # regex
    sCode = re.sub('re.search\\("([^"]+)", *(m.group\\(\\d\\))\\)', "(\\2.search(/\\1/) >= 0)", sCode)
    sCode = re.sub(".search\\(/\\(\\?i\\)([^/]+)/\\) >= 0\\)", ".search(/\\1/i) >= 0)", sCode)
    sCode = re.sub('(look\\(sx?[][.a-z:()]*), "\\(\\?i\\)([^"]+)"', "\\1, /\\2/i", sCode)
    sCode = re.sub('(look\\(sx?[][.a-z:()]*), "([^"]+)"', "\\1, /\\2/", sCode)
    sCode = re.sub('(look_chk1\\(dDA, sx?[][.a-z:()]*, [0-9a-z.()]+), "\\(\\?i\\)([^"]+)"', "\\1, /\\2/i", sCode)
    sCode = re.sub('(look_chk1\\(dDA, sx?[][.a-z:()]*, [0-9a-z.()]+), "([^"]+)"', "\\1, /\\2/i", sCode)
    sCode = re.sub('m\\.group\\((\\d+)\\) +in +(a[a-zA-Z]+)', "\\2.has(m[\\1])", sCode)
    sCode = sCode.replace("(?<!-)", "")  # todo

    # slices
    sCode = sCode.replace("[:m.start()]", ".slice(0,m.index)")
    sCode = sCode.replace("[m.end():]", ".slice(m.end[0])")
    sCode = sCode.replace("[m.start():m.end()]", ".slice(m.index, m.end[0])")


    sCode = re.sub("\\[(-?\\d+):(-?\\d+)\\]", ".slice(\\1,\\2)", sCode)
    sCode = re.sub("\\[(-?\\d+):\\]", ".slice(\\1)", sCode)
    sCode = re.sub("\\[:(-?\\d+)\\]", ".slice(0,\\1)", sCode)
    # regex matches
    sCode = sCode.replace(".end()", ".end[0]")
    sCode = sCode.replace(".start()", ".index")
    sCode = sCode.replace("m.group()", "m[0]")
    sCode = re.sub("\\.start\\((\\d+)\\)", ".start[\\1]", sCode)
    sCode = re.sub("m\\.group\\((\\d+)\\)", "m[\\1]", sCode)
    # tuples -> lists
    sCode = re.sub("\\((m\\.start\\[\\d+\\], m\\[\\d+\\])\\)", "[\\1]", sCode)
    # regex
    sCode = sCode.replace(r"\w[\w-]+", "[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-st][a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-st-]+")
    sCode = sCode.replace(r"/\w/", "/[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-st]/")
    sCode = sCode.replace(r"[\w-]", "[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-st-]")
    sCode = sCode.replace(r"[\w,]", "[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-st,]")
    return sCode


def regex2js (sRegex, sWORDLIMITLEFT):
    "converts Python regex to JS regex and returns JS regex and list of negative lookbefore assertions"
    #   Latin letters: http://unicode-table.com/fr/
    #   0-9  and  _
................................................................................
        sRegex = sRegex + "i"
    if not lNegLookBeforeRegex:
        lNegLookBeforeRegex = None
    return (sRegex, lNegLookBeforeRegex)


def pyRuleToJS (lRule, dJSREGEXES, sWORDLIMITLEFT):

    lRuleJS = copy.deepcopy(lRule)
    del lRule[-1] # tGroups positioning codes are useless for Python
    # error messages
    for aAction in lRuleJS[6]:
        if aAction[1] == "-":
            aAction[2] = aAction[2].replace(" ", " ") # nbsp --> nnbsp
            aAction[4] = aAction[4].replace("« ", "« ").replace(" »", " »").replace(" :", " :").replace(" :", " :")
    # js regexes
    lRuleJS[1], lNegLookBehindRegex = regex2js(dJSREGEXES.get(lRuleJS[3], lRuleJS[1]), sWORDLIMITLEFT)
    lRuleJS.append(lNegLookBehindRegex)
    return lRuleJS


def writeRulesToJSArray (lRules):

    sArray = "[\n"
    for sOption, aRuleGroup in lRules:

        sArray += '  ["' + sOption + '", [\n'  if sOption  else  "  [false, [\n"
        for sRegex, bCaseInsensitive, sLineId, sRuleId, nPriority, lActions, aGroups, aNegLookBehindRegex in aRuleGroup:
            sArray += '    [' + sRegex + ", "
            sArray += "true, " if bCaseInsensitive  else "false, "
            sArray += '"' + sLineId + '", '
            sArray += '"' + sRuleId + '", '
            sArray += str(nPriority) + ", "
            sArray += json.dumps(lActions, ensure_ascii=False) + ", "
            sArray += json.dumps(aGroups, ensure_ascii=False) + ", "
            sArray += json.dumps(aNegLookBehindRegex, ensure_ascii=False) + "],\n"
        sArray += "  ]],\n"





    sArray += "]"
    return sArray


def groupsPositioningCodeToList (sGroupsPositioningCode):

    if not sGroupsPositioningCode:
        return None
    return [ int(sCode)  if sCode.isdigit() or (sCode[0:1] == "-" and sCode[1:].isdigit())  else sCode \
             for sCode in sGroupsPositioningCode.split(",") ]
"""
Convert Python code and regexes to JavaScript code
"""

import copy
import re
import json


def py2js (sCode):
................................................................................
    sCode = sCode.replace(" r'", " '")
    sCode = sCode.replace(',r"', ',"')
    sCode = sCode.replace(",r'", ",'")
    # operators
    sCode = sCode.replace(" and ", " && ")
    sCode = sCode.replace(" or ", " || ")
    sCode = re.sub("\\bnot\\b", "!", sCode)
    #sCode = re.sub("(.+) if (.+) else (.+)", "(\\2) ? \\1 : \\3", sCode)
    # boolean
    sCode = sCode.replace("False", "false")
    sCode = sCode.replace("True", "true")
    sCode = sCode.replace("None", "null")
    sCode = sCode.replace("bool", "Boolean")
    # methods
    sCode = sCode.replace(".__len__()", ".length")
    sCode = sCode.replace(".endswith", ".endsWith")
    sCode = sCode.replace(".find", ".indexOf")
    sCode = sCode.replace(".startswith", ".startsWith")
    sCode = sCode.replace(".lower", ".toLowerCase")
................................................................................
    sCode = sCode.replace(".strip", ".gl_trim")
    sCode = sCode.replace(".lstrip", ".gl_trimLeft")
    sCode = sCode.replace(".rstrip", ".gl_trimRight")
    sCode = sCode.replace('.replace("."', r".replace(/\./g")
    sCode = sCode.replace('.replace("..."', r".replace(/\.\.\./g")
    sCode = re.sub(r'.replace\("([^"]+)" ?,', ".replace(/\\1/g,", sCode)
    # regex
    sCode = re.sub('m\\.group\\((\\d+)\\) +in +(a[a-zA-Z]+)', "\\2.has(m[\\1])", sCode)

    sCode = re.sub('(lToken\\S+) +in +(a[a-zA-Z]+)', "\\2.has(\\1)", sCode)
    # slices
    sCode = sCode.replace("[:m.start()]", ".slice(0,m.index)")
    sCode = sCode.replace("[m.end():]", ".slice(m.end[0])")
    sCode = sCode.replace("[m.start():m.end()]", ".slice(m.index, m.end[0])")
    sCode = sCode.replace('[lToken[nLastToken]["nEnd"]:]', '.slice(lToken[nLastToken]["nEnd"])')
    sCode = sCode.replace('[:lToken[1+nTokenOffset]["nStart"]]', '.slice(0,lToken[1+nTokenOffset]["nStart"])')
    sCode = re.sub("\\[(-?\\d+):(-?\\d+)\\]", ".slice(\\1,\\2)", sCode)
    sCode = re.sub("\\[(-?\\d+):\\]", ".slice(\\1)", sCode)
    sCode = re.sub("\\[:(-?\\d+)\\]", ".slice(0,\\1)", sCode)
    # regex matches
    sCode = sCode.replace(".end()", ".end[0]")
    sCode = sCode.replace(".start()", ".index")
    sCode = sCode.replace("m.group()", "m[0]")
    sCode = re.sub("\\.start\\((\\d+)\\)", ".start[\\1]", sCode)
    sCode = re.sub("m\\.group\\((\\d+)\\)", "m[\\1]", sCode)
    # tuples -> lists
    sCode = re.sub("\\((m\\.start\\[\\d+\\], m\\[\\d+\\])\\)", "[\\1]", sCode)
    # regex


    sCode = sCode.replace(r"[\\w", "[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-stᴀ-ᶿ")
    sCode = sCode.replace(r"\\w", "[a-zA-Zà-öÀ-Ö0-9_ø-ÿØ-ßĀ-ʯfi-stᴀ-ᶿ]")
    return sCode


def regex2js (sRegex, sWORDLIMITLEFT):
    "converts Python regex to JS regex and returns JS regex and list of negative lookbefore assertions"
    #   Latin letters: http://unicode-table.com/fr/
    #   0-9  and  _
................................................................................
        sRegex = sRegex + "i"
    if not lNegLookBeforeRegex:
        lNegLookBeforeRegex = None
    return (sRegex, lNegLookBeforeRegex)


def pyRuleToJS (lRule, dJSREGEXES, sWORDLIMITLEFT):
    "modify Python rules -> JS rules"
    lRuleJS = copy.deepcopy(lRule)
    # graph rules
    if lRuleJS[0] == "@@@@":
        return lRuleJS
    del lRule[-1] # tGroups positioning codes are useless for Python
    # error messages
    for aAction in lRuleJS[6]:
        if aAction[1] == "-":
            aAction[2] = aAction[2].replace(" ", " ") # nbsp --> nnbsp
            aAction[4] = aAction[4].replace("« ", "« ").replace(" »", " »").replace(" :", " :").replace(" :", " :")
    # js regexes
    lRuleJS[1], lNegLookBehindRegex = regex2js(dJSREGEXES.get(lRuleJS[3], lRuleJS[1]), sWORDLIMITLEFT)
    lRuleJS.append(lNegLookBehindRegex)
    return lRuleJS


def writeRulesToJSArray (lRules):
    "create rules as a string of arrays (to be bundled in a JSON string)"
    sArray = "[\n"
    for sOption, aRuleGroup in lRules:
        if sOption != "@@@@":
            sArray += '  ["' + sOption + '", [\n'  if sOption  else  "  [false, [\n"
            for sRegex, bCaseInsensitive, sLineId, sRuleId, nPriority, lActions, aGroups, aNegLookBehindRegex in aRuleGroup:
                sArray += '    [' + sRegex + ", "
                sArray += "true, " if bCaseInsensitive  else "false, "
                sArray += '"' + sLineId + '", '
                sArray += '"' + sRuleId + '", '
                sArray += str(nPriority) + ", "
                sArray += json.dumps(lActions, ensure_ascii=False) + ", "
                sArray += json.dumps(aGroups, ensure_ascii=False) + ", "
                sArray += json.dumps(aNegLookBehindRegex, ensure_ascii=False) + "],\n"
            sArray += "  ]],\n"
        else:
            sArray += '  ["' + sOption + '", [\n'
            for sGraphName, sLineId in aRuleGroup:
                sArray += '    ["' + sGraphName + '", "' + sLineId + '"],\n'
            sArray += "  ]],\n"
    sArray += "]"
    return sArray


def groupsPositioningCodeToList (sGroupsPositioningCode):
    "convert <sGroupsPositioningCode> to a list of codes (numbers or strings)"
    if not sGroupsPositioningCode:
        return None
    return [ int(sCode)  if sCode.isdigit() or (sCode[0:1] == "-" and sCode[1:].isdigit())  else sCode \
             for sCode in sGroupsPositioningCode.split(",") ]

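As a quick illustration (not part of the check-in), here is roughly what py2js() turns a typical condition snippet into, assuming the full compile_rules_js_convert module is importable:

import compile_rules_js_convert as jsconv

sCondition = 'morph(dDA, prevword1(s, m.start()), "V", False) and not m.group(1).endswith("s")'
print(jsconv.py2js(sCondition))
# expected output, roughly:
#   morph(dDA, prevword1(s, m.index), "V", false) && ! m[1].endsWith("s")
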
Added darg.py version [0a2000eec8].

#!python3

"""
RULE GRAPH BUILDER
"""

# by Olivier R.
# License: MPL 2

import re
import traceback



class DARG:
    """DIRECT ACYCLIC RULE GRAPH"""
    # This code is inspired by Steve Hanov’s DAWG, 2011. (http://stevehanov.ca/blog/index.php?id=115)

    def __init__ (self, lRule, sLangCode):
        print(" > DARG", end="")

        # Preparing DARG
        self.sLangCode = sLangCode
        self.nRule = len(lRule)
        self.aPreviousRule = []
        Node.resetNextId()
        self.oRoot = Node()
        self.lUncheckedNodes = []  # list of nodes that have not been checked for duplication.
        self.lMinimizedNodes = {}  # dictionary of unique nodes that have been checked for duplication.
        self.nNode = 0
        self.nArc = 0

        # build
        lRule.sort()
        for aRule in lRule:
            self.insert(aRule)
        self.finish()
        self.countNodes()
        self.countArcs()
        self.displayInfo()

    # BUILD DARG
    def insert (self, aRule):
        "insert a new rule (tokens must be inserted in order)"
        if aRule < self.aPreviousRule:
            exit("# Error: tokens must be inserted in order.")

        # find the common prefix between the current rule and the previous one
        nCommonPrefix = 0
        for i in range(min(len(aRule), len(self.aPreviousRule))):
            if aRule[i] != self.aPreviousRule[i]:
                break
            nCommonPrefix += 1

        # Check the lUncheckedNodes for redundant nodes, proceeding from last
        # one down to the common prefix size. Then truncate the list at that point.
        self._minimize(nCommonPrefix)

        # add the suffix, starting from the correct node mid-way through the graph
        if len(self.lUncheckedNodes) == 0:
            oNode = self.oRoot
        else:
            oNode = self.lUncheckedNodes[-1][2]

        iToken = nCommonPrefix
        for sToken in aRule[nCommonPrefix:]:
            oNextNode = Node()
            oNode.dArcs[sToken] = oNextNode
            self.lUncheckedNodes.append((oNode, sToken, oNextNode))
            if iToken == (len(aRule) - 2):
                oNode.bFinal = True
            iToken += 1
            oNode = oNextNode
        oNode.bFinal = True
        self.aPreviousRule = aRule

    def finish (self):
        "minimize unchecked nodes"
        self._minimize(0)

    def _minimize (self, downTo):
        # proceed from the leaf up to a certain point
        for i in range( len(self.lUncheckedNodes)-1, downTo-1, -1 ):
            oNode, sToken, oChildNode = self.lUncheckedNodes[i]
            if oChildNode in self.lMinimizedNodes:
                # replace the child with the previously encountered one
                oNode.dArcs[sToken] = self.lMinimizedNodes[oChildNode]
            else:
                # add the state to the minimized nodes.
                self.lMinimizedNodes[oChildNode] = oChildNode
            self.lUncheckedNodes.pop()

    def countNodes (self):
        "count nodes within the whole graph"
        self.nNode = len(self.lMinimizedNodes)

    def countArcs (self):
        "count arcs within the whole graph"
        self.nArc = 0
        for oNode in self.lMinimizedNodes:
            self.nArc += len(oNode.dArcs)

    def displayInfo (self):
        "display informations about the rule graph"
        print(": {:>10,} rules,  {:>10,} nodes,  {:>10,} arcs".format(self.nRule, self.nNode, self.nArc))

    def createGraph (self):
        "create the graph as a dictionary"
        dGraph = { 0: self.oRoot.getNodeAsDict() }
        for oNode in self.lMinimizedNodes:
            sHashId = oNode.__hash__()
            if sHashId not in dGraph:
                dGraph[sHashId] = oNode.getNodeAsDict()
            else:
                print("Error. Double node… same id: ", sHashId)
                print(str(oNode.getNodeAsDict()))
        dGraph = self._rewriteKeysOfDARG(dGraph)
        self._sortActions(dGraph)
        self._checkRegexes(dGraph)
        return dGraph

    def _rewriteKeysOfDARG (self, dGraph):
        "keys of DARG are long numbers (hashes): this function replace these hashes with smaller numbers (to reduce storing size)"
        # create translation dictionary
        dKeyTrans = {}
        for i, nKey in enumerate(dGraph):
            dKeyTrans[nKey] = i
        # replace keys
        dNewGraph = {}
        for nKey, dVal in dGraph.items():
            dNewGraph[dKeyTrans[nKey]] = dVal
        for nKey, dVal in dGraph.items():
            for sArc, val in dVal.items():
                if type(val) is int:
                    dVal[sArc] = dKeyTrans[val]
                else:
                    for sArc, nKey in val.items():
                        val[sArc] = dKeyTrans[nKey]
        return dNewGraph

    def _sortActions (self, dGraph):
        "when a pattern is found, several actions may be launched, and it must be performed in a certain order"
        for nKey, dVal in dGraph.items():
            if "<rules>" in dVal:
                for sLineId, nKey in dVal["<rules>"].items():
                    # we change the dictionary of actions into a list of actions (all values of the dictionary point to the final node)
                    if isinstance(dGraph[nKey], dict):
                        dGraph[nKey] = sorted(dGraph[nKey].keys())

    def _checkRegexes (self, dGraph):
        "check validity of regexes"
        aRegex = set()
        for nKey, dVal in dGraph.items():
            if "<re_value>" in dVal:
                for sRegex in dVal["<re_value>"]:
                    if sRegex not in aRegex:
                        self._checkRegex(sRegex)
                        aRegex.add(sRegex)
            if "<re_morph>" in dVal:
                for sRegex in dVal["<re_morph>"]:
                    if sRegex not in aRegex:
                        self._checkRegex(sRegex)
                        aRegex.add(sRegex)
        aRegex.clear()

    def _checkRegex (self, sRegex):
        #print(sRegex)
        if "¬" in sRegex:
            sPattern, sNegPattern = sRegex.split("¬")
            try:
                if not sNegPattern:
                    print("# Warning! Empty negpattern:", sRegex)
                re.compile(sPattern)
                if sNegPattern != "*":
                    re.compile(sNegPattern)
            except:
                print("# Error. Wrong regex:", sRegex)
                exit()
        else:
            try:
                if not sRegex:
                    print("# Warning! Empty pattern:", sRegex)
                re.compile(sRegex)
            except:
                print("# Error. Wrong regex:", sRegex)
                exit()


class Node:
    """Node of the rule graph"""

    NextId = 0

    def __init__ (self):
        self.i = Node.NextId
        Node.NextId += 1
        self.bFinal = False
        self.dArcs = {}          # key: arc value; value: a node

    @classmethod
    def resetNextId (cls):
        "reset to 0 the node counter"
        cls.NextId = 0

    def __str__ (self):
        # Caution! this function is used for hashing and comparison!
        cFinal = "1"  if self.bFinal  else "0"
        l = [cFinal]
        for (key, oNode) in self.dArcs.items():
            l.append(str(key))
            l.append(str(oNode.i))
        return "_".join(l)

    def __hash__ (self):
        # Used as a key in a python dictionary.
        return self.__str__().__hash__()

    def __eq__ (self, other):
        # Used as a key in a python dictionary.
        # Nodes are equivalent if they have identical arcs, and each identical arc leads to identical states.
        return self.__str__() == other.__str__()

    def getNodeAsDict (self):
        "returns the node as a dictionary structure"
        dNode = {}
        dReValue = {}
        dReMorph = {}
        dRule = {}
        dLemma = {}
        dMeta = {}
        dTag = {}
        for sArc, oNode in self.dArcs.items():
            if sArc.startswith("@") and len(sArc) > 1:
                dReMorph[sArc[1:]] = oNode.__hash__()
            elif sArc.startswith("~") and len(sArc) > 1:
                dReValue[sArc[1:]] = oNode.__hash__()
            elif sArc.startswith(">") and len(sArc) > 1:
                dLemma[sArc[1:]] = oNode.__hash__()
            elif sArc.startswith("*") and len(sArc) > 1:
                dMeta[sArc[1:]] = oNode.__hash__()
            elif sArc.startswith("/") and len(sArc) > 1:
                dTag[sArc[1:]] = oNode.__hash__()
            elif sArc.startswith("##"):
                dRule[sArc[1:]] = oNode.__hash__()
            else:
                dNode[sArc] = oNode.__hash__()
        if dReValue:
            dNode["<re_value>"] = dReValue
        if dReMorph:
            dNode["<re_morph>"] = dReMorph
        if dLemma:
            dNode["<lemmas>"] = dLemma
        if dTag:
            dNode["<tags>"] = dTag
        if dMeta:
            dNode["<meta>"] = dMeta
        if dRule:
            dNode["<rules>"] = dRule
        #if self.bFinal:
        #    dNode["<final>"] = 1
        return dNode

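A minimal usage sketch (not part of the check-in): two made-up token rules, each ending with a "##…" action identifier as getNodeAsDict() expects, compiled into a graph dictionary the same way compile_rules does it.

import darg

# made-up rules, for illustration only
lRule = [
    ["ce", "jour", "##100_demo_a"],
    ["ce", "soir", "##101_demo_b"],
]
oDARG = darg.DARG(lRule, "fr")     # DARG sorts the rules itself
dGraph = oDARG.createGraph()
for nKey, dNode in dGraph.items():
    print(nKey, dNode)
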
Modified gc_core/js/lang_core/gc_engine.js from [ab7d9a98c9] to [2a12fbd54b].

${string}
${regex}
${map}


if (typeof(require) !== 'undefined') {
    //var helpers = require("resource://grammalecte/graphspell/helpers.js");
    var gc_options = require("resource://grammalecte/${lang}/gc_options.js");
    var gc_rules = require("resource://grammalecte/${lang}/gc_rules.js");

    var cregex = require("resource://grammalecte/${lang}/cregex.js");
    var text = require("resource://grammalecte/text.js");
}


function capitalizeArray (aArray) {
    // can’t map on user defined function??
................................................................................
    return aNew;
}


// data
let _sAppContext = "";                                  // what software is running
let _dOptions = null;
let _aIgnoredRules = new Set();
let _oSpellChecker = null;

let _dAnalyses = new Map();                             // cache for data from dictionary



var gc_engine = {

    //// Informations

    lang: "${lang}",
    locales: ${loc},
    pkg: "${implname}",
    name: "${name}",
    version: "${version}",
    author: "${author}",

    //// Parsing

    parse: function (sText, sCountry="${country_default}", bDebug=false, bContext=false) {
        // analyses the paragraph sText and returns list of errors
        let dErrors;
        let errs;
        let sAlt = sText;
        let dDA = new Map();        // Disambiguator
        let dPriority = new Map();  // Key = position; value = priority
        let sNew = "";

        // parse paragraph
        try {
            [sNew, dErrors] = this._proofread(sText, sAlt, 0, true, dDA, dPriority, sCountry, bDebug, bContext);
            if (sNew) {
                sText = sNew;
            }
        }
        catch (e) {
            console.error(e);
        }

        // cleanup
        if (sText.includes(" ")) {
            sText = sText.replace(/ /g, ' '); // nbsp
        }
        if (sText.includes(" ")) {
            sText = sText.replace(/ /g, ' '); // snbsp
        }
        if (sText.includes("'")) {
            sText = sText.replace(/'/g, "’");
        }
        if (sText.includes("‑")) {
            sText = sText.replace(/‑/g, "-"); // nobreakdash
        }

        // parse sentence
        for (let [iStart, iEnd] of this._getSentenceBoundaries(sText)) {
            if ((iEnd - iStart) > 4 && (iEnd - iStart) < 2000) {
                dDA.clear();
                try {
                    [, errs] = this._proofread(sText.slice(iStart, iEnd), sAlt.slice(iStart, iEnd), iStart, false, dDA, dPriority, sCountry, bDebug, bContext);
                    dErrors.gl_update(errs);
                }
                catch (e) {
                    console.error(e);
                }
            }
        }
        return Array.from(dErrors.values());
    },

    _zEndOfSentence: new RegExp ('([.?!:;…][ .?!… »”")]*|.$)', "g"),
    _zBeginOfParagraph: new RegExp ("^[-  –—.,;?!…]*", "ig"),
    _zEndOfParagraph: new RegExp ("[-  .,;?!…–—]*$", "ig"),

    _getSentenceBoundaries: function* (sText) {
        let mBeginOfSentence = this._zBeginOfParagraph.exec(sText);
        let iStart = this._zBeginOfParagraph.lastIndex;
        let m;
        while ((m = this._zEndOfSentence.exec(sText)) !== null) {
            yield [iStart, this._zEndOfSentence.lastIndex];
            iStart = this._zEndOfSentence.lastIndex;
        }
    },

    _proofread: function (s, sx, nOffset, bParagraph, dDA, dPriority, sCountry, bDebug, bContext) {
        let dErrs = new Map();
        let bChange = false;
        let bIdRule = option('idrule');
        let m;
        let bCondMemo;
        let nErrorStart;

        for (let [sOption, lRuleGroup] of this._getRules(bParagraph)) {
            if (!sOption || option(sOption)) {
                for (let [zRegex, bUppercase, sLineId, sRuleId, nPriority, lActions, lGroups, lNegLookBefore] of lRuleGroup) {
                    if (!_aIgnoredRules.has(sRuleId)) {
                        while ((m = zRegex.gl_exec2(s, lGroups, lNegLookBefore)) !== null) {
                            bCondMemo = null;
                            /*if (bDebug) {
                                console.log(">>>> Rule # " + sLineId + " - Text: " + s + " opt: "+ sOption);
                            }*/
                            for (let [sFuncCond, cActionType, sWhat, ...eAct] of lActions) {
                            // action in lActions: [ condition, action type, replacement/suggestion/action[, iGroup[, message, URL]] ]
                                try {
                                    //console.log(oEvalFunc[sFuncCond]);
                                    bCondMemo = (!sFuncCond || oEvalFunc[sFuncCond](s, sx, m, dDA, sCountry, bCondMemo));
                                    if (bCondMemo) {
                                        switch (cActionType) {
                                            case "-":
                                                // grammar error
                                                //console.log("-> error detected in " + sLineId + "\nzRegex: " + zRegex.source);
                                                nErrorStart = nOffset + m.start[eAct[0]];
                                                if (!dErrs.has(nErrorStart) || nPriority > dPriority.get(nErrorStart)) {
                                                    dErrs.set(nErrorStart, this._createError(s, sx, sWhat, nOffset, m, eAct[0], sLineId, sRuleId, bUppercase, eAct[1], eAct[2], bIdRule, sOption, bContext));
                                                    dPriority.set(nErrorStart, nPriority);
                                                }
                                                break;
                                            case "~":
                                                // text processor
                                                //console.log("-> text processor by " + sLineId + "\nzRegex: " + zRegex.source);
                                                s = this._rewrite(s, sWhat, eAct[0], m, bUppercase);
                                                bChange = true;
                                                if (bDebug) {
                                                    console.log("~ " + s + "  -- " + m[eAct[0]] + "  # " + sLineId);
                                                }
                                                break;
                                            case "=":
                                                // disambiguation
                                                //console.log("-> disambiguation by " + sLineId + "\nzRegex: " + zRegex.source);
                                                oEvalFunc[sWhat](s, m, dDA);
                                                if (bDebug) {
                                                    console.log("= " + m[0] + "  # " + sLineId + "\nDA: " + dDA.gl_toString());
                                                }
                                                break;
                                            case ">":
                                                // we do nothing, this test is just a condition to apply all following actions
                                                break;
                                            default:
                                                console.log("# error: unknown action at " + sLineId);
                                        }
                                    } else {
                                        if (cActionType == ">") {
                                            break;

                                        }
                                    }
                                }
                                catch (e) {
                                    console.log(s);
                                    console.log("# line id: " + sLineId + "\n# rule id: " + sRuleId);
                                    console.error(e);
                                }
                            }
                        }
                    }
                }
            }
        }
        if (bChange) {
            return [s, dErrs];
        }
        return [false, dErrs];
    },

    _createError: function (s, sx, sRepl, nOffset, m, iGroup, sLineId, sRuleId, bUppercase, sMsg, sURL, bIdRule, sOption, bContext) {
        let oErr = {};
        oErr["nStart"] = nOffset + m.start[iGroup];
        oErr["nEnd"] = nOffset + m.end[iGroup];
        oErr["sLineId"] = sLineId;
        oErr["sRuleId"] = sRuleId;
        oErr["sType"] = (sOption) ? sOption : "notype";
        // suggestions
        if (sRepl.slice(0,1) === "=") {
            let sugg = oEvalFunc[sRepl.slice(1)](s, m);
            if (sugg) {
                if (bUppercase && m[iGroup].slice(0,1).gl_isUpperCase()) {
                    oErr["aSuggestions"] = capitalizeArray(sugg.split("|"));
                } else {
                    oErr["aSuggestions"] = sugg.split("|");
                }
            } else {
                oErr["aSuggestions"] = [];
            }
        } else if (sRepl == "_") {
            oErr["aSuggestions"] = [];
        } else {
            if (bUppercase && m[iGroup].slice(0,1).gl_isUpperCase()) {
                oErr["aSuggestions"] = capitalizeArray(sRepl.gl_expand(m).split("|"));
            } else {
                oErr["aSuggestions"] = sRepl.gl_expand(m).split("|");
            }
        }
        // Message
        let sMessage = "";
        if (sMsg.slice(0,1) === "=") {
            sMessage = oEvalFunc[sMsg.slice(1)](s, m);
        } else {
            sMessage = sMsg.gl_expand(m);
        }
        if (bIdRule) {
            sMessage += " ##" + sLineId + " #" + sRuleId;
        }
        oErr["sMessage"] = sMessage;
        // URL
        oErr["URL"] = sURL || "";
        // Context
        if (bContext) {
            oErr["sUnderlined"] = sx.slice(m.start[iGroup], m.end[iGroup]);
            oErr["sBefore"] = sx.slice(Math.max(0, m.start[iGroup]-80), m.start[iGroup]);
            oErr["sAfter"] = sx.slice(m.end[iGroup], m.end[iGroup]+80);
        }

        return oErr;
    },

    _rewrite: function (s, sRepl, iGroup, m, bUppercase) {
        // text processor: write sRepl in s at iGroup position
        let ln = m.end[iGroup] - m.start[iGroup];
        let sNew = "";
        if (sRepl === "*") {
            sNew = " ".repeat(ln);
        } else if (sRepl === ">" || sRepl === "_" || sRepl === "~") {
            sNew = sRepl + " ".repeat(ln-1);
        } else if (sRepl === "@") {
            sNew = "@".repeat(ln);
        } else if (sRepl.slice(0,1) === "=") {
            sNew = oEvalFunc[sRepl.slice(1)](s, m);
            sNew = sNew + " ".repeat(ln-sNew.length);
            if (bUppercase && m[iGroup].slice(0,1).gl_isUpperCase()) {
                sNew = sNew.gl_toCapitalize();

            }
        } else {
            sNew = sRepl.gl_expand(m);
            sNew = sNew + " ".repeat(ln-sNew.length);
        }
        //console.log("\n"+s+"\nstart: "+m.start[iGroup]+" end:"+m.end[iGroup])
        return s.slice(0, m.start[iGroup]) + sNew + s.slice(m.end[iGroup]);

    },

    // Actions on rules

    ignoreRule: function (sRuleId) {
        _aIgnoredRules.add(sRuleId);
    },

    resetIgnoreRules: function () {
        _aIgnoredRules.clear();
................................................................................
    reactivateRule: function (sRuleId) {
        _aIgnoredRules.delete(sRuleId);
    },

    listRules: function* (sFilter=null) {
        // generator: returns tuple (sOption, sLineId, sRuleId)
        try {
            for (let [sOption, lRuleGroup] of this._getRules(true)) {
                for (let [,, sLineId, sRuleId,,] of lRuleGroup) {
                    if (!sFilter || sRuleId.test(sFilter)) {
                        yield [sOption, sLineId, sRuleId];
                    }
                }
            }
            for (let [sOption, lRuleGroup] of this._getRules(false)) {
                for (let [,, sLineId, sRuleId,,] of lRuleGroup) {
                    if (!sFilter || sRuleId.test(sFilter)) {
                        yield [sOption, sLineId, sRuleId];
                    }
                }
            }
        }
        catch (e) {
            console.error(e);
        }
    },

    _getRules: function (bParagraph) {
        if (!bParagraph) {
            return gc_rules.lSentenceRules;
        }
        return gc_rules.lParagraphRules;
    },

    //// Initialization

    load: function (sContext="JavaScript", sPath="") {
        try {
            if (typeof(require) !== 'undefined') {
                var spellchecker = require("resource://grammalecte/graphspell/spellchecker.js");
                _oSpellChecker = new spellchecker.SpellChecker("${lang}", "", "${dic_main_filename_js}", "${dic_extended_filename_js}", "${dic_community_filename_js}", "${dic_personal_filename_js}");
            } else {
                _oSpellChecker = new SpellChecker("${lang}", sPath, "${dic_main_filename_js}", "${dic_extended_filename_js}", "${dic_community_filename_js}", "${dic_personal_filename_js}");
            }
            _sAppContext = sContext;
            _dOptions = gc_options.getOptions(sContext).gl_shallowCopy();     // duplication necessary, to be able to reset to default
        }
        catch (e) {
            console.error(e);
        }
    },

    getSpellChecker: function () {
        return _oSpellChecker;
    },

    //// Options

    setOption: function (sOpt, bVal) {
        if (_dOptions.has(sOpt)) {
            _dOptions.set(sOpt, bVal);
        }
    },
................................................................................

    getDefaultOptions: function () {
        return gc_options.getOptions(_sAppContext).gl_shallowCopy();
    },

    resetOptions: function () {
        _dOptions = gc_options.getOptions(_sAppContext).gl_shallowCopy();
    }
};


//////// Common functions

function option (sOpt) {
    // return true if option sOpt is active
    return _dOptions.get(sOpt);
}

function displayInfo (dDA, aWord) {
    // for debugging: info of word
    if (!aWord) {
        console.log("> nothing to find");
        return true;
    }
    if (!_dAnalyses.has(aWord[1]) && !_storeMorphFromFSA(aWord[1])) {
        console.log("> not in FSA");
        return true;
    }
    if (dDA.has(aWord[0])) {
        console.log("DA: " + dDA.get(aWord[0]));
    }
    console.log("FSA: " + _dAnalyses.get(aWord[1]));
    return true;
}

function _storeMorphFromFSA (sWord) {
    // retrieves morphologies list from _oSpellChecker -> _dAnalyses
    //console.log("register: "+sWord + " " + _oSpellChecker.getMorph(sWord).toString())
    _dAnalyses.set(sWord, _oSpellChecker.getMorph(sWord));
    return !!_dAnalyses.get(sWord);
}

function morph (dDA, aWord, sPattern, bStrict=true, bNoWord=false) {
    // analyse a tuple (position, word), return true if sPattern in morphologies (disambiguation on)
    if (!aWord) {
        //console.log("morph: noword, returns " + bNoWord);
        return bNoWord;
    }
    //console.log("aWord: "+aWord.toString());
    if (!_dAnalyses.has(aWord[1]) && !_storeMorphFromFSA(aWord[1])) {
        return false;
    }
    let lMorph = dDA.has(aWord[0]) ? dDA.get(aWord[0]) : _dAnalyses.get(aWord[1]);
    //console.log("lMorph: "+lMorph.toString());
    if (lMorph.length === 0) {
        return false;
    }
    //console.log("***");
    if (bStrict) {
        return lMorph.every(s  =>  (s.search(sPattern) !== -1));
    }
    return lMorph.some(s  =>  (s.search(sPattern) !== -1));
}

function morphex (dDA, aWord, sPattern, sNegPattern, bNoWord=false) {
    // analyse a tuple (position, word), returns true if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation on)
    if (!aWord) {
        //console.log("morph: noword, returns " + bNoWord);
        return bNoWord;
    }
    //console.log("aWord: "+aWord.toString());
    if (!_dAnalyses.has(aWord[1]) && !_storeMorphFromFSA(aWord[1])) {
        return false;
    }
    let lMorph = dDA.has(aWord[0]) ? dDA.get(aWord[0]) : _dAnalyses.get(aWord[1]);
    //console.log("lMorph: "+lMorph.toString());
    if (lMorph.length === 0) {
        return false;
    }
    //console.log("***");
    // check negative condition
    if (lMorph.some(s  =>  (s.search(sNegPattern) !== -1))) {
        return false;
    }
    // search sPattern
    return lMorph.some(s  =>  (s.search(sPattern) !== -1));
}

function analyse (sWord, sPattern, bStrict=true) {
    // analyse a word, return true if sPattern in morphologies (disambiguation off)
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {
        return false;
    }
    if (bStrict) {
        return _dAnalyses.get(sWord).every(s  =>  (s.search(sPattern) !== -1));
    }
    return _dAnalyses.get(sWord).some(s  =>  (s.search(sPattern) !== -1));
}

function analysex (sWord, sPattern, sNegPattern) {
    // analyse a word, returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation off)
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {
        return false;
    }
    // check negative condition
    if (_dAnalyses.get(sWord).some(s  =>  (s.search(sNegPattern) !== -1))) {
        return false;
    }
    // search sPattern
    return _dAnalyses.get(sWord).some(s  =>  (s.search(sPattern) !== -1));
}

function stem (sWord) {
    // returns a list of sWord's stems
    if (!sWord) {
        return [];
    }
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {
        return [];
    }
    return _dAnalyses.get(sWord).map( s => s.slice(1, s.indexOf(" ")) );
}


//// functions to get text outside pattern scope

// warning: check compile_rules.py to understand how it works

function nextword (s, iStart, n) {
    // get the nth word of the input string or empty string
    let z = new RegExp("^(?: +[a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st%_-]+){" + (n-1).toString() + "} +([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st%_-]+)", "ig");
    let m = z.exec(s.slice(iStart));
    if (!m) {
        return null;
    }
    return [iStart + z.lastIndex - m[1].length, m[1]];
}

function prevword (s, iEnd, n) {
    // get the (-)nth word of the input string or empty string
    let z = new RegExp("([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st%_-]+) +(?:[a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st%_-]+ +){" + (n-1).toString() + "}$", "i");
    let m = z.exec(s.slice(0, iEnd));
    if (!m) {
        return null;
    }
    return [m.index, m[1]];
}

function nextword1 (s, iStart) {
    // get next word (optimization)
    let _zNextWord = new RegExp ("^ +([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st_][a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st_-]*)", "ig");
    let m = _zNextWord.exec(s.slice(iStart));
    if (!m) {
        return null;
    }
    return [iStart + _zNextWord.lastIndex - m[1].length, m[1]];
}

const _zPrevWord = new RegExp ("([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st_][a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-st_-]*) +$", "i");

function prevword1 (s, iEnd) {
    // get previous word (optimization)
    let m = _zPrevWord.exec(s.slice(0, iEnd));
    if (!m) {
        return null;
    }
    return [m.index, m[1]];
}

function look (s, zPattern, zNegPattern=null) {
    // seek zPattern in s (before/after/fulltext), if antipattern zNegPattern not in s
    try {
        if (zNegPattern && zNegPattern.test(s)) {
            return false;
        }
        return zPattern.test(s);
    }
    catch (e) {
        console.error(e);
    }
    return false;
}

function look_chk1 (dDA, s, nOffset, zPattern, sPatternGroup1, sNegPatternGroup1=null) {
    // returns True if s has pattern zPattern and m.group(1) has pattern sPatternGroup1
    let m = zPattern.gl_exec2(s, null);
    if (!m) {
        return false;
    }
    try {
        let sWord = m[1];
        let nPos = m.start[1] + nOffset;
        if (sNegPatternGroup1) {
            return morphex(dDA, [nPos, sWord], sPatternGroup1, sNegPatternGroup1);
        }
        return morph(dDA, [nPos, sWord], sPatternGroup1, false);
    }
    catch (e) {
        console.error(e);
        return false;
    }
}


//////// Disambiguator

function select (dDA, nPos, sWord, sPattern, lDefault=null) {
    if (!sWord) {
        return true;
    }
    if (dDA.has(nPos)) {
        return true;
    }
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {
        return true;
    }
    if (_dAnalyses.get(sWord).length === 1) {
        return true;
    }
    let lSelect = _dAnalyses.get(sWord).filter( sMorph => sMorph.search(sPattern) !== -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != _dAnalyses.get(sWord).length) {
            dDA.set(nPos, lSelect);
        }
    } else if (lDefault) {
        dDA.set(nPos, lDefault);
    }
    return true;
}

function exclude (dDA, nPos, sWord, sPattern, lDefault=null) {
    if (!sWord) {
        return true;
    }
    if (dDA.has(nPos)) {
        return true;
    }
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {
        return true;
    }
    if (_dAnalyses.get(sWord).length === 1) {
        return true;
    }
    let lSelect = _dAnalyses.get(sWord).filter( sMorph => sMorph.search(sPattern) === -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != _dAnalyses.get(sWord).length) {
            dDA.set(nPos, lSelect);
        }
    } else if (lDefault) {
        dDA.set(nPos, lDefault);
    }
    return true;
}

function define (dDA, nPos, lMorph) {
    dDA.set(nPos, lMorph);
    return true;
}


//////// GRAMMAR CHECKER PLUGINS

${pluginsJS}

${callablesJS}

if (typeof(exports) !== 'undefined') {
    exports.lang = gc_engine.lang;
    exports.locales = gc_engine.locales;
    exports.pkg = gc_engine.pkg;
    exports.name = gc_engine.name;
    exports.version = gc_engine.version;
    exports.author = gc_engine.author;

    exports.parse = gc_engine.parse;


    exports._zEndOfSentence = gc_engine._zEndOfSentence;
    exports._zBeginOfParagraph = gc_engine._zBeginOfParagraph;
    exports._zEndOfParagraph = gc_engine._zEndOfParagraph;
    exports._getSentenceBoundaries = gc_engine._getSentenceBoundaries;
    exports._proofread = gc_engine._proofread;
    exports._createError = gc_engine._createError;
    exports._rewrite = gc_engine._rewrite;
    exports.ignoreRule = gc_engine.ignoreRule;
    exports.resetIgnoreRules = gc_engine.resetIgnoreRules;
    exports.reactivateRule = gc_engine.reactivateRule;
    exports.listRules = gc_engine.listRules;
    exports._getRules = gc_engine._getRules;
    exports.load = gc_engine.load;
    exports.getSpellChecker = gc_engine.getSpellChecker;
    exports.setOption = gc_engine.setOption;
    exports.setOptions = gc_engine.setOptions;
    exports.getOptions = gc_engine.getOptions;
    exports.getDefaultOptions = gc_engine.getDefaultOptions;
    exports.resetOptions = gc_engine.resetOptions;


}

${string}
${regex}
${map}


if (typeof(require) !== 'undefined') {

    var gc_options = require("resource://grammalecte/${lang}/gc_options.js");
    var gc_rules = require("resource://grammalecte/${lang}/gc_rules.js");
    var gc_rules_graph = require("resource://grammalecte/${lang}/gc_rules_graph.js");
    var cregex = require("resource://grammalecte/${lang}/cregex.js");
    var text = require("resource://grammalecte/text.js");
}


function capitalizeArray (aArray) {
    // can’t map on user defined function??
................................................................................
    return aNew;
}


// data
let _sAppContext = "";                                  // what software is running
let _dOptions = null;

let _oSpellChecker = null;
let _oTokenizer = null;
let _aIgnoredRules = new Set();



var gc_engine = {

    //// Informations

    lang: "${lang}",
    locales: ${loc},
    pkg: "${implname}",
    name: "${name}",
    version: "${version}",
    author: "${author}",

    //// Initialization

    load: function (sContext="JavaScript", sPath="") {
        try {
            if (typeof(require) !== 'undefined') {
                var spellchecker = require("resource://grammalecte/graphspell/spellchecker.js");
                _oSpellChecker = new spellchecker.SpellChecker("${lang}", "", "${dic_main_filename_js}", "${dic_extended_filename_js}", "${dic_community_filename_js}", "${dic_personal_filename_js}");
            } else {
                _oSpellChecker = new SpellChecker("${lang}", sPath, "${dic_main_filename_js}", "${dic_extended_filename_js}", "${dic_community_filename_js}", "${dic_personal_filename_js}");
            }
            _sAppContext = sContext;
            _dOptions = gc_options.getOptions(sContext).gl_shallowCopy();     // duplication necessary, to be able to reset to default
            _oTokenizer = _oSpellChecker.getTokenizer();
            _oSpellChecker.activateStorage();
        }
        catch (e) {
            console.error(e);
        }
    },

    getSpellChecker: function () {
        return _oSpellChecker;
    },

    //// Rules

    getRules: function (bParagraph) {
        if (!bParagraph) {
            return gc_rules.lSentenceRules;
        }
        return gc_rules.lParagraphRules;
    },

    ignoreRule: function (sRuleId) {
        _aIgnoredRules.add(sRuleId);
    },

    resetIgnoreRules: function () {
        _aIgnoredRules.clear();
................................................................................
    reactivateRule: function (sRuleId) {
        _aIgnoredRules.delete(sRuleId);
    },

    listRules: function* (sFilter=null) {
        // generator: returns tuple (sOption, sLineId, sRuleId)
        try {
            for (let [sOption, lRuleGroup] of this.getRules(true)) {
                for (let [,, sLineId, sRuleId,,] of lRuleGroup) {
                    if (!sFilter || sRuleId.search(sFilter) !== -1) {
                        yield [sOption, sLineId, sRuleId];
                    }
                }
            }
            for (let [sOption, lRuleGroup] of this.getRules(false)) {
                for (let [,, sLineId, sRuleId,,] of lRuleGroup) {
                    if (!sFilter || sRuleId.search(sFilter) !== -1) {
                        yield [sOption, sLineId, sRuleId];
                    }
                }
            }
        }
        catch (e) {
            console.error(e);
        }
    },

    //// Options

    setOption: function (sOpt, bVal) {
        if (_dOptions.has(sOpt)) {
            _dOptions.set(sOpt, bVal);
        }
    },
................................................................................

    getDefaultOptions: function () {
        return gc_options.getOptions(_sAppContext).gl_shallowCopy();
    },

    resetOptions: function () {
        _dOptions = gc_options.getOptions(_sAppContext).gl_shallowCopy();
    },

    //// Parsing

    parse: function (sText, sCountry="${country_default}", bDebug=false, dOptions=null, bContext=false) {
        let oText = new TextParser(sText);
        return oText.parse(sCountry, bDebug, dOptions, bContext);
    },

    _zEndOfSentence: new RegExp ('([.?!:;…][ .?!… »”")]*|.$)', "g"),
    _zBeginOfParagraph: new RegExp ("^[-  –—.,;?!…]*", "ig"),
    _zEndOfParagraph: new RegExp ("[-  .,;?!…–—]*$", "ig"),

    getSentenceBoundaries: function* (sText) {
        let mBeginOfSentence = this._zBeginOfParagraph.exec(sText);
        let iStart = this._zBeginOfParagraph.lastIndex;
        let m;
        while ((m = this._zEndOfSentence.exec(sText)) !== null) {
            yield [iStart, this._zEndOfSentence.lastIndex];
            iStart = this._zEndOfSentence.lastIndex;
        }
    }
};
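
// Minimal usage sketch (illustrative only, not part of the generated file); the sample text
// and the option name are given as examples:
//     gc_engine.load("JavaScript", "");
//     gc_engine.setOption("idrule", false);
//     let lErrors = gc_engine.parse("Quand nous arriverons, je préviendrais tout le monde.");
//     for (let oErr of lErrors) {
//         console.log(oErr["sRuleId"], oErr["sMessage"], oErr["aSuggestions"]);
//     }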


class TextParser {

    constructor (sText) {
        this.sText = sText;
        this.sText0 = sText;
        this.sSentence = "";
        this.sSentence0 = "";
        this.nOffsetWithinParagraph = 0;
        this.lToken = [];
        this.dTokenPos = new Map();
        this.dTags = new Map();
        this.dError = new Map();
        this.dErrorPriority = new Map();  // Key = position; value = priority
    }

    asString () {
        let s = "===== TEXT =====\n";
        s += "sentence: " + this.sSentence0 + "\n";
        s += "now:      " + this.sSentence  + "\n";
        for (let dToken of this.lToken) {
            s += `#${dToken["i"]}\t${dToken["nStart"]}:${dToken["nEnd"]}\t${dToken["sValue"]}\t${dToken["sType"]}`;
            if (dToken.hasOwnProperty("lMorph")) {
                s += "\t" + dToken["lMorph"].toString();
            }
            if (dToken.hasOwnProperty("aTags")) {
                s += "\t" + dToken["aTags"].toString();
            }
            s += "\n";
        }
        return s;
    }

    parse (sCountry="${country_default}", bDebug=false, dOptions=null, bContext=false) {
        // analyses the paragraph sText and returns list of errors
        let dOpt = dOptions || _dOptions;
        let bShowRuleId = option('idrule');
        // parse paragraph
        try {
            this.parseText(this.sText, this.sText0, true, 0, sCountry, dOpt, bShowRuleId, bDebug, bContext);
        }
        catch (e) {
            console.error(e);
        }

        // cleanup
        if (this.sText.includes(" ")) {
            this.sText = this.sText.replace(/ /g, ' '); // nbsp
        }
        if (this.sText.includes(" ")) {
            this.sText = this.sText.replace(/ /g, ' '); // snbsp
        }
        if (this.sText.includes("'")) {
            this.sText = this.sText.replace(/'/g, "’");
        }
        if (this.sText.includes("‑")) {
            this.sText = this.sText.replace(/‑/g, "-"); // nobreakdash
        }

        // parse sentence
        for (let [iStart, iEnd] of gc_engine.getSentenceBoundaries(this.sText)) {
            try {
                this.sSentence = this.sText.slice(iStart, iEnd);
                this.sSentence0 = this.sText0.slice(iStart, iEnd);
                this.nOffsetWithinParagraph = iStart;
                this.lToken = Array.from(_oTokenizer.genTokens(this.sSentence, true));
                this.dTokenPos.clear();
                for (let dToken of this.lToken) {
                    if (dToken["sType"] != "INFO") {
                        this.dTokenPos.set(dToken["nStart"], dToken);
                    }
                }
                this.parseText(this.sSentence, this.sSentence0, false, iStart, sCountry, dOpt, bShowRuleId, bDebug, bContext);
            }
            catch (e) {
                console.error(e);
            }
        }
        return Array.from(this.dError.values());
    }

    parseText (sText, sText0, bParagraph, nOffset, sCountry, dOptions, bShowRuleId, bDebug, bContext) {
        let bChange = false;
        let m;

        for (let [sOption, lRuleGroup] of gc_engine.getRules(bParagraph)) {
            if (sOption == "@@@@") {
                // graph rules
                if (!bParagraph && bChange) {
                    this.update(sText, bDebug);
                    bChange = false;
                }
                for (let [sGraphName, sLineId] of lRuleGroup) {
                    if (!dOptions.has(sGraphName) || dOptions.get(sGraphName)) {
                        if (bDebug) {
                            console.log(">>>> GRAPH: " + sGraphName + " " + sLineId);
                        }
                        sText = this.parseGraph(gc_rules_graph.dAllGraph[sGraphName], sCountry, dOptions, bShowRuleId, bDebug, bContext);
                    }
                }
            }
            else if (!sOption || option(sOption)) {
                for (let [zRegex, bUppercase, sLineId, sRuleId, nPriority, lActions, lGroups, lNegLookBefore] of lRuleGroup) {
                    if (!_aIgnoredRules.has(sRuleId)) {
                        while ((m = zRegex.gl_exec2(sText, lGroups, lNegLookBefore)) !== null) {
                            let bCondMemo = null;
                            for (let [sFuncCond, cActionType, sWhat, ...eAct] of lActions) {
                                // action in lActions: [ condition, action type, replacement/suggestion/action[, iGroup[, message, URL]] ]
                                try {
                                    bCondMemo = (!sFuncCond || oEvalFunc[sFuncCond](sText, sText0, m, this.dTokenPos, sCountry, bCondMemo));
                                    if (bCondMemo) {
                                        switch (cActionType) {
                                            case "-":
                                                // grammar error
                                                //console.log("-> error detected in " + sLineId + "\nzRegex: " + zRegex.source);
                                                let nErrorStart = nOffset + m.start[eAct[0]];
                                                if (!this.dError.has(nErrorStart) || nPriority > this.dErrorPriority.get(nErrorStart)) {
                                                    this.dError.set(nErrorStart, this._createErrorFromRegex(sText, sText0, sWhat, nOffset, m, eAct[0], sLineId, sRuleId, bUppercase, eAct[1], eAct[2], bShowRuleId, sOption, bContext));
                                                    this.dErrorPriority.set(nErrorStart, nPriority);
                                                }
                                                break;
                                            case "~":
                                                // text processor
                                                //console.log("-> text processor by " + sLineId + "\nzRegex: " + zRegex.source);
                                                sText = this.rewriteText(sText, sWhat, eAct[0], m, bUppercase);
                                                bChange = true;
                                                if (bDebug) {
                                                    console.log("~ " + sText + "  -- " + m[eAct[0]] + "  # " + sLineId);
                                                }
                                                break;
                                            case "=":
                                                // disambiguation
                                                //console.log("-> disambiguation by " + sLineId + "\nzRegex: " + zRegex.source);
                                                oEvalFunc[sWhat](sText, m, this.dTokenPos);
                                                if (bDebug) {
                                                    console.log("= " + m[0] + "  # " + sLineId, "\nDA:", this.dTokenPos);
                                                }
                                                break;
                                            case ">":
                                                // we do nothing, this test is just a condition to apply all following actions
                                                break;
                                            default:
                                                console.log("# error: unknown action at " + sLineId);
                                        }
                                    } else {
                                        if (cActionType == ">") {
                                            break;
                                        }
                                    }
                                }
                                catch (e) {
                                    console.log(sText);
                                    console.log("# line id: " + sLineId + "\n# rule id: " + sRuleId);
                                    console.error(e);
                                }
                            }
                        }
                    }
                }
            }
        }
        if (bChange) {
            if (bParagraph) {
                this.sText = sText;
            } else {
                this.sSentence = sText;
            }
        }
    }

    update (sSentence, bDebug=false) {
        // update <sSentence> and retokenize
        this.sSentence = sSentence;
        let lNewToken = Array.from(_oTokenizer.genTokens(sSentence, true));
        for (let dToken of lNewToken) {
            if (this.dTokenPos.gl_get(dToken["nStart"], {}).hasOwnProperty("lMorph")) {
                dToken["lMorph"] = this.dTokenPos.get(dToken["nStart"])["lMorph"];
            }
            if (this.dTokenPos.gl_get(dToken["nStart"], {}).hasOwnProperty("aTags")) {
                dToken["aTags"] = this.dTokenPos.get(dToken["nStart"])["aTags"];
            }
        }
        this.lToken = lNewToken;
        this.dTokenPos.clear();
        for (let dToken of this.lToken) {
            if (dToken["sType"] != "INFO") {
                this.dTokenPos.set(dToken["nStart"], dToken);
            }
        }
        if (bDebug) {
            console.log("UPDATE:");
            console.log(this.asString());
        }
    }

    * _getNextPointers (dToken, dGraph, dPointer, bDebug=false) {
        // generator: return nodes where <dToken> “values” match <dNode> arcs
        try {
            let dNode = dPointer["dNode"];
            let iNode1 = dPointer["iNode1"];
            let bTokenFound = false;
            // token value
            if (dNode.hasOwnProperty(dToken["sValue"])) {
                if (bDebug) {
                    console.log("  MATCH: " + dToken["sValue"]);
                }
                yield { "iNode1": iNode1, "dNode": dGraph[dNode[dToken["sValue"]]] };
                bTokenFound = true;
            }
            if (dToken["sValue"].slice(0,2).gl_isTitle()) { // we test only the first 2 chars, to handle valid words such as "Laissez-les", "Passe-partout".
                let sValue = dToken["sValue"].toLowerCase();
                if (dNode.hasOwnProperty(sValue)) {
                    if (bDebug) {
                        console.log("  MATCH: " + sValue);
                    }
                    yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] };
                    bTokenFound = true;
                }
            }
            else if (dToken["sValue"].gl_isUpperCase()) {
                let sValue = dToken["sValue"].toLowerCase();
                if (dNode.hasOwnProperty(sValue)) {
                    if (bDebug) {
                        console.log("  MATCH: " + sValue);
                    }
                    yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] };
                    bTokenFound = true;
                }
                sValue = dToken["sValue"].gl_toCapitalize();
                if (dNode.hasOwnProperty(sValue)) {
                    if (bDebug) {
                        console.log("  MATCH: " + sValue);
                    }
                    yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] };
                    bTokenFound = true;
                }
            }
            // regex value arcs
            if (dToken["sType"] != "INFO"  &&  dToken["sType"] != "PUNC"  &&  dToken["sType"] != "SIGN") {
                if (dNode.hasOwnProperty("<re_value>")) {
                    for (let sRegex in dNode["<re_value>"]) {
                        if (!sRegex.includes("¬")) {
                            // no anti-pattern
                            if (dToken["sValue"].search(sRegex) !== -1) {
                                if (bDebug) {
                                    console.log("  MATCH: ~" + sRegex);
                                }
                                yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_value>"][sRegex]] };
                                bTokenFound = true;
                            }
                        } else {
                            // there is an anti-pattern
                            let [sPattern, sNegPattern] = sRegex.split("¬", 2);
                            if (sNegPattern && dToken["sValue"].search(sNegPattern) !== -1) {
                                continue;
                            }
                            if (!sPattern || dToken["sValue"].search(sPattern) !== -1) {
                                if (bDebug) {
                                    console.log("  MATCH: ~" + sRegex);
                                }
                                yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_value>"][sRegex]] };
                                bTokenFound = true;
                            }
                        }
                    }
                }
            }
            // analysable tokens
            if (dToken["sType"].slice(0,4) == "WORD") {
                // token lemmas
                if (dNode.hasOwnProperty("<lemmas>")) {
                    for (let sLemma of _oSpellChecker.getLemma(dToken["sValue"])) {
                        if (dNode["<lemmas>"].hasOwnProperty(sLemma)) {
                            if (bDebug) {
                                console.log("  MATCH: >" + sLemma);
                            }
                            yield { "iNode1": iNode1, "dNode": dGraph[dNode["<lemmas>"][sLemma]] };
                            bTokenFound = true;
                        }
                    }
                }
                // regex morph arcs
                if (dNode.hasOwnProperty("<re_morph>")) {
                    let lMorph = (dToken.hasOwnProperty("lMorph")) ? dToken["lMorph"] : _oSpellChecker.getMorph(dToken["sValue"]);
                    for (let sRegex in dNode["<re_morph>"]) {
                        if (!sRegex.includes("¬")) {
                            // no anti-pattern
                            if (lMorph.some(sMorph  =>  (sMorph.search(sRegex) !== -1))) {
                                if (bDebug) {
                                    console.log("  MATCH: @" + sRegex);
                                }
                                yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] };
                                bTokenFound = true;
                            }
                        } else {
                            // there is an anti-pattern
                            let [sPattern, sNegPattern] = sRegex.split("¬", 2);
                            if (sNegPattern == "*") {
                                // all morphologies must match with <sPattern>
                                if (sPattern) {
                                    if (lMorph.length > 0  &&  lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1))) {
                                        if (bDebug) {
                                            console.log("  MATCH: @" + sRegex);
                                        }
                                        yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] };
                                        bTokenFound = true;
                                    }
                                }
                            } else {
                                if (sNegPattern  &&  lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                                    continue;
                                }
                                if (!sPattern  ||  lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1))) {
                                    if (bDebug) {
                                        console.log("  MATCH: @" + sRegex);
                                    }
                                    yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] };
                                    bTokenFound = true;
                                }
                            }
                        }
                    }
                }
            }
            // token tags
            if (dToken.hasOwnProperty("aTags") && dNode.hasOwnProperty("<tags>")) {
                for (let sTag of dToken["aTags"]) {
                    if (dNode["<tags>"].hasOwnProperty(sTag)) {
                        if (bDebug) {
                            console.log("  MATCH: /" + sTag);
                        }
                        yield { "iNode1": iNode1, "dNode": dGraph[dNode["<tags>"][sTag]] };
                        bTokenFound = true;
                    }
                }
            }
            // meta arc (for token type)
            if (dNode.hasOwnProperty("<meta>")) {
                for (let sMeta in dNode["<meta>"]) {
                    // no regex here, we just search if <dToken["sType"]> exists within <sMeta>
                    if (sMeta == "*" || dToken["sType"] == sMeta) {
                        if (bDebug) {
                            console.log("  MATCH: *" + sMeta);
                        }
                        yield { "iNode1": iNode1, "dNode": dGraph[dNode["<meta>"][sMeta]] };
                        bTokenFound = true;
                    }
                    else if (sMeta.includes("¬")) {
                        if (!sMeta.includes(dToken["sType"])) {
                            if (bDebug) {
                                console.log("  MATCH: *" + sMeta);
                            }
                            yield { "iNode1": iNode1, "dNode": dGraph[dNode["<meta>"][sMeta]] };
                            bTokenFound = true;
                        }
                    }
                }
            }
            if (!bTokenFound  &&  dPointer.hasOwnProperty("bKeep")) {
                yield dPointer;
            }
            // JUMP
            // Warning! Recursion!
            if (dNode.hasOwnProperty("<>")) {
                let dPointer2 = { "iNode1": iNode1, "dNode": dGraph[dNode["<>"]], "bKeep": true };
                yield* this._getNextPointers(dToken, dGraph, dPointer2, bDebug);
            }
        }
        catch (e) {
            console.error(e);
        }
    }
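
    // Illustrative shape of a graph passed to parseGraph() (node ids, keys and the rule id are
    // invented): plain keys match token values, "<lemmas>"/"<re_value>"/"<re_morph>"/"<tags>"/"<meta>"
    // are the special arcs handled above, "<>" is a jump arc, and "<rules>" leads to the actions node.
    //     let dGraphSketch = {
    //         0: { "ne": 1, "<re_morph>": { ":V": 1 } },
    //         1: { "<lemmas>": { "pouvoir": 2 } },
    //         2: { "<rules>": { "#1234": 3 } },
    //         3: [ "fr_g1_exemple_1" ]
    //     };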

    parseGraph (dGraph, sCountry="${country_default}", dOptions=null, bShowRuleId=false, bDebug=false, bContext=false) {
        // parse graph with tokens from the text and execute actions encountered
        let lPointer = [];
        let bTagAndRewrite = false;
        try {
            for (let [iToken, dToken] of this.lToken.entries()) {
                if (bDebug) {
                    console.log("TOKEN: " + dToken["sValue"]);
                }
                // check arcs for each existing pointer
                let lNextPointer = [];
                for (let dPointer of lPointer) {
                    lNextPointer.push(...this._getNextPointers(dToken, dGraph, dPointer, bDebug));
                }
                lPointer = lNextPointer;
                // check arcs of first nodes
                lPointer.push(...this._getNextPointers(dToken, dGraph, { "iNode1": iToken, "dNode": dGraph[0] }, bDebug));
                // check if there are rules to check for each pointer
                for (let dPointer of lPointer) {
                    if (dPointer["dNode"].hasOwnProperty("<rules>")) {
                        let bChange = this._executeActions(dGraph, dPointer["dNode"]["<rules>"], dPointer["iNode1"]-1, iToken, dOptions, sCountry, bShowRuleId, bDebug, bContext);
                        if (bChange) {
                            bTagAndRewrite = true;
                        }
                    }
                }
            }
        } catch (e) {
            console.error(e);
        }
        if (bTagAndRewrite) {
            this.rewriteFromTags(bDebug);
        }
        if (bDebug) {
            console.log(this.asString());
        }
        return this.sSentence;
    }

    _executeActions (dGraph, dNode, nTokenOffset, nLastToken, dOptions, sCountry, bShowRuleId, bDebug, bContext) {
        // execute actions found in the DARG
        let bChange = false;
        for (let [sLineId, nextNodeKey] of Object.entries(dNode)) {
            let bCondMemo = null;
            for (let sRuleId of dGraph[nextNodeKey]) {
                try {
                    if (bDebug) {
                        console.log("   >TRY: " + sRuleId + " " + sLineId);
                    }
                    let [sOption, sFuncCond, cActionType, sWhat, ...eAct] = gc_rules_graph.dRule[sRuleId];
                    // Suggestion    [ option, condition, "-", replacement/suggestion/action, iTokenStart, iTokenEnd, cStartLimit, cEndLimit, bCaseSvty, nPriority, sMessage, sURL ]
                    // TextProcessor [ option, condition, "~", replacement/suggestion/action, iTokenStart, iTokenEnd, bCaseSvty ]
                    // Disambiguator [ option, condition, "=", replacement/suggestion/action ]
                    // Tag           [ option, condition, "/", replacement/suggestion/action, iTokenStart, iTokenEnd ]
                    // Immunity      [ option, condition, "%", "",                            iTokenStart, iTokenEnd ]
                    // Test          [ option, condition, ">", "" ]
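                    // Hypothetical example of an entry of gc_rules_graph.dRule following the "-" layout
                    // documented above (every value here is invented for illustration):
                    //     "fr_g1_exemple_1": [ "", "_g_cond_1", "-", "=suggVerb", 1, 2, "<", ">", true, 4, "Accord incorrect.", "" ]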
                    if (!sOption || dOptions.gl_get(sOption, false)) {
                        bCondMemo = !sFuncCond || oEvalFunc[sFuncCond](this.lToken, nTokenOffset, nLastToken, sCountry, bCondMemo, this.dTags, this.sSentence, this.sSentence0);
                        if (bCondMemo) {
                            if (cActionType == "-") {
                                // grammar error
                                let [iTokenStart, iTokenEnd, cStartLimit, cEndLimit, bCaseSvty, nPriority, sMessage, sURL] = eAct;
                                let nTokenErrorStart = (iTokenStart > 0) ? nTokenOffset + iTokenStart : nLastToken + iTokenStart;
                                if (!this.lToken[nTokenErrorStart].hasOwnProperty("bImmune")) {
                                    let nTokenErrorEnd = (iTokenEnd > 0) ? nTokenOffset + iTokenEnd : nLastToken + iTokenEnd;
                                    let nErrorStart = this.nOffsetWithinParagraph + ((cStartLimit == "<") ? this.lToken[nTokenErrorStart]["nStart"] : this.lToken[nTokenErrorStart]["nEnd"]);
                                    let nErrorEnd = this.nOffsetWithinParagraph + ((cEndLimit == ">") ? this.lToken[nTokenErrorEnd]["nEnd"] : this.lToken[nTokenErrorEnd]["nStart"]);
                                    if (!this.dError.has(nErrorStart) || nPriority > this.dErrorPriority.gl_get(nErrorStart, -1)) {
                                        this.dError.set(nErrorStart, this._createErrorFromTokens(sWhat, nTokenOffset, nLastToken, nTokenErrorStart, nErrorStart, nErrorEnd, sLineId, sRuleId, bCaseSvty, sMessage, sURL, bShowRuleId, sOption, bContext));
                                        this.dErrorPriority.set(nErrorStart, nPriority);
                                        if (bDebug) {
                                            console.log("    NEW_ERROR: ",  this.dError.get(nErrorStart));
                                        }
                                    }
                                }
                            }
                            else if (cActionType == "~") {
                                // text processor
                                let nTokenStart = (eAct[0] > 0) ? nTokenOffset + eAct[0] : nLastToken + eAct[0];
                                let nTokenEnd = (eAct[1] > 0) ? nTokenOffset + eAct[1] : nLastToken + eAct[1];
                                this._tagAndPrepareTokenForRewriting(sWhat, nTokenStart, nTokenEnd, nTokenOffset, nLastToken, eAct[2], bDebug);
                                bChange = true;
                                if (bDebug) {
                                    console.log(`    TEXT_PROCESSOR: [${this.lToken[nTokenStart]["sValue"]}:${this.lToken[nTokenEnd]["sValue"]}]  > ${sWhat}`);
                                }
                            }
                            else if (cActionType == "=") {
                                // disambiguation
                                oEvalFunc[sWhat](this.lToken, nTokenOffset, nLastToken);
                                if (bDebug) {
                                    console.log(`    DISAMBIGUATOR: (${sWhat})  [${this.lToken[nTokenOffset+1]["sValue"]}:${this.lToken[nLastToken]["sValue"]}]`);
                                }
                            }
                            else if (cActionType == ">") {
                                // we do nothing, this test is just a condition to apply all following actions
                                if (bDebug) {
                                    console.log("    COND_OK");
                                }
                            }
                            else if (cActionType == "/") {
                                // Tag
                                let nTokenStart = (eAct[0] > 0) ? nTokenOffset + eAct[0] : nLastToken + eAct[0];
                                let nTokenEnd = (eAct[1] > 0) ? nTokenOffset + eAct[1] : nLastToken + eAct[1];
                                for (let i = nTokenStart; i <= nTokenEnd; i++) {
                                    if (this.lToken[i].hasOwnProperty("aTags")) {
                                        this.lToken[i]["aTags"].add(...sWhat.split("|"))
                                    } else {
                                        this.lToken[i]["aTags"] = new Set(sWhat.split("|"));
                                    }
                                }
                                if (bDebug) {
                                    console.log(`    TAG:  ${sWhat} > [${this.lToken[nTokenStart]["sValue"]}:${this.lToken[nTokenEnd]["sValue"]}]`);
                                }
                                if (!this.dTags.has(sWhat)) {
                                    this.dTags.set(sWhat, [nTokenStart, nTokenStart]);
                                } else {
                                    this.dTags.set(sWhat, [Math.min(nTokenStart, this.dTags.get(sWhat)[0]), Math.max(nTokenEnd, this.dTags.get(sWhat)[1])]);
                                }
                            }
                            else if (cActionType == "%") {
                                // immunity
                                if (bDebug) {
                                    console.log("    IMMUNITY: " + _rules_graph.dRule[sRuleId]);
                                }
                                let nTokenStart = (eAct[0] > 0) ? nTokenOffset + eAct[0] : nLastToken + eAct[0];
                                let nTokenEnd = (eAct[1] > 0) ? nTokenOffset + eAct[1] : nLastToken + eAct[1];
                                if (nTokenEnd - nTokenStart == 0) {
                                    this.lToken[nTokenStart]["bImmune"] = true;
                                    let nErrorStart = this.nOffsetWithinParagraph + this.lToken[nTokenStart]["nStart"];
                                    if (this.dError.has(nErrorStart)) {
                                        this.dError.delete(nErrorStart);
                                    }
                                } else {
                                    for (let i = nTokenStart;  i <= nTokenEnd;  i++) {
                                        this.lToken[i]["bImmune"] = true;
                                        let nErrorStart = this.nOffsetWithinParagraph + this.lToken[i]["nStart"];
                                        if (this.dError.has(nErrorStart)) {
                                            this.dError.delete(nErrorStart);
                                        }
                                    }
                                }
                            } else {
                                console.log("# error: unknown action at " + sLineId);
                            }
                        }
                        else if (cActionType == ">") {
                            if (bDebug) {
                                console.log("    COND_BREAK");
                            }
                            break;
                        }
                    }
                }
                catch (e) {
                    console.log("Error: ", sLineId, sRuleId, this.sSentence);
                    console.error(e);
                }
            }
        }
        return bChange;
    }

    _createErrorFromRegex (sText, sText0, sSugg, nOffset, m, iGroup, sLineId, sRuleId, bUppercase, sMsg, sURL, bShowRuleId, sOption, bContext) {
        let nStart = nOffset + m.start[iGroup];
        let nEnd = nOffset + m.end[iGroup];
        // suggestions
        let lSugg = [];
        if (sSugg.startsWith("=")) {
            sSugg = oEvalFunc[sSugg.slice(1)](sText, m);
            lSugg = (sSugg) ? sSugg.split("|") : [];
        } else if (sSugg == "_") {
            lSugg = [];
        } else {
            lSugg = sSugg.gl_expand(m).split("|");
        }
        if (bUppercase && lSugg.length > 0 && m[iGroup].slice(0,1).gl_isUpperCase()) {
            lSugg = capitalizeArray(lSugg);
        }
        // Message
        let sMessage = (sMsg.startsWith("=")) ? oEvalFunc[sMsg.slice(1)](sText, m) : sMsg.gl_expand(m);
        if (bShowRuleId) {
            sMessage += "  ## " + sLineId + " # " + sRuleId;
        }
        //
        return this._createError(nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext);
    }

    _createErrorFromTokens (sSugg, nTokenOffset, nLastToken, iFirstToken, nStart, nEnd, sLineId, sRuleId, bCaseSvty, sMsg, sURL, bShowRuleId, sOption, bContext) {
        // suggestions
        let lSugg = [];
        if (sSugg.startsWith("=")) {
            sSugg = oEvalFunc[sSugg.slice(1)](this.lToken, nTokenOffset, nLastToken);
            lSugg = (sSugg) ? sSugg.split("|") : [];
        } else if (sSugg == "_") {
            lSugg = [];
        } else {
            lSugg = this._expand(sSugg, nTokenOffset, nLastToken).split("|");
        }
        if (bCaseSvty && lSugg.length > 0 && this.lToken[iFirstToken]["sValue"].slice(0,1).gl_isUpperCase()) {
            lSugg = capitalizeArray(lSugg);
        }
        // Message
        let sMessage = (sMsg.startsWith("=")) ? oEvalFunc[sMsg.slice(1)](this.lToken, nTokenOffset, nLastToken) : this._expand(sMsg, nTokenOffset, nLastToken);
        if (bShowRuleId) {
            sMessage += " ## " + sLineId + " # " + sRuleId;
        }
        //
        return this._createError(nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext);
    }

    _createError (nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext) {
        let oErr = {
            "nStart": nStart,
            "nEnd": nEnd,
            "sLineId": sLineId,
            "sRuleId": sRuleId,
            "sType": sOption || "notype",
            "sMessage": sMessage,
            "aSuggestions": lSugg,
            "URL": sURL
        }
        if (bContext) {
            oErr['sUnderlined'] = this.sText0.slice(nStart, nEnd);
            oErr['sBefore'] = this.sText0.slice(Math.max(0,nStart-80), nStart);
            oErr['sAfter'] = this.sText0.slice(nEnd, nEnd+80);
        }
        return oErr;
    }

    _expand (sText, nTokenOffset, nLastToken) {
        let m;
        while ((m = /\\(-?[0-9]+)/.exec(sText)) !== null) {
            if (m[1].slice(0,1) == "-") {
                sText = sText.replace(m[0], this.lToken[nLastToken+parseInt(m[1],10)+1]["sValue"]);
            } else {
                sText = sText.replace(m[0], this.lToken[nTokenOffset+parseInt(m[1],10)]["sValue"]);
            }
        }
        return sText;
    }
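
    // Illustrative expansion (token values invented): "\\2" is replaced by the sValue of the token
    // at nTokenOffset+2, "\\-1" by the sValue of the last token of the matched span.
    //     this._expand("« \\2 » au lieu de « \\-1 »", nTokenOffset, nLastToken);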

    rewriteText (sText, sRepl, iGroup, m, bUppercase) {
        // text processor: write sRepl in sText at iGroup position
        let ln = m.end[iGroup] - m.start[iGroup];
        let sNew = "";
        if (sRepl === "*") {
            sNew = " ".repeat(ln);
        }
        else if (sRepl === ">" || sRepl === "_" || sRepl === "~") {
            sNew = sRepl + " ".repeat(ln-1);
        }
        else if (sRepl === "@") {
            sNew = "@".repeat(ln);
        }
        else if (sRepl.slice(0,1) === "=") {
            sNew = oEvalFunc[sRepl.slice(1)](sText, m);
            sNew = sNew + " ".repeat(ln-sNew.length);
            if (bUppercase && m[iGroup].slice(0,1).gl_isUpperCase()) {
                sNew = sNew.gl_toCapitalize();
            }
        } else {
            sNew = sRepl.gl_expand(m);
            sNew = sNew + " ".repeat(ln-sNew.length);
        }
        //console.log(sText+"\nstart: "+m.start[iGroup]+" end:"+m.end[iGroup]);
        return sText.slice(0, m.start[iGroup]) + sNew + sText.slice(m.end[iGroup]);
    }
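
    // Note (descriptive): every branch above pads <sNew> with spaces up to the length of the matched
    // group, so character offsets of later regex matches in the paragraph remain valid.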

    _tagAndPrepareTokenForRewriting (sWhat, nTokenRewriteStart, nTokenRewriteEnd, nTokenOffset, nLastToken, bCaseSvty, bDebug) {
        // text processor: rewrite tokens between <nTokenRewriteStart> and <nTokenRewriteEnd> position
        if (sWhat === "*") {
            // purge text
            if (nTokenRewriteEnd - nTokenRewriteStart == 0) {
                this.lToken[nTokenRewriteStart]["bToRemove"] = true;
            } else {
                for (let i = nTokenRewriteStart;  i <= nTokenRewriteEnd;  i++) {
                    this.lToken[i]["bToRemove"] = true;
                }
            }
        }
        else if (sWhat === "␣") {
            // merge tokens
            this.lToken[nTokenRewriteStart]["nMergeUntil"] = nTokenRewriteEnd;
        }
        else if (sWhat === "_") {
            // neutralized token
            if (nTokenRewriteEnd - nTokenRewriteStart == 0) {
                this.lToken[nTokenRewriteStart]["sNewValue"] = "_";
            } else {
                for (let i = nTokenRewriteStart;  i <= nTokenRewriteEnd;  i++) {
                    this.lToken[i]["sNewValue"] = "_";
                }
            }
        }
        else {
            if (sWhat.startsWith("=")) {
                sWhat = oEvalFunc[sWhat.slice(1)](this.lToken, nTokenOffset, nLastToken);
            } else {
                sWhat = this._expand(sWhat, nTokenOffset, nLastToken);
            }
            let bUppercase = bCaseSvty && this.lToken[nTokenRewriteStart]["sValue"].slice(0,1).gl_isUpperCase();
            if (nTokenRewriteEnd - nTokenRewriteStart == 0) {
                // one token
                if (bUppercase) {
                    sWhat = sWhat.gl_toCapitalize();
                }
                this.lToken[nTokenRewriteStart]["sNewValue"] = sWhat;
            }
            else {
                // several tokens
                let lTokenValue = sWhat.split("|");
                if (lTokenValue.length != (nTokenRewriteEnd - nTokenRewriteStart + 1)) {
                    console.log("Error. Text processor: number of replacements != number of tokens.");
                    return;
                }
                let j = 0;
                for (let i = nTokenRewriteStart;  i <= nTokenRewriteEnd;  i++) {
                    let sValue = lTokenValue[j];
                    if (!sValue || sValue === "*") {
                        this.lToken[i]["bToRemove"] = true;
                    } else {
                        if (bUppercase) {
                            sValue = sValue.gl_toCapitalize();
                        }
                        this.lToken[i]["sNewValue"] = sValue;
                    }
                    j++;
                }
            }
        }
    }
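
    // Illustrative effects (token indices invented): "*" flags tokens for removal, "␣" asks
    // rewriteFromTags() to merge the span into a single token, and "a|b" assigns one new value per token.
    //     this._tagAndPrepareTokenForRewriting("␣", 3, 4, nTokenOffset, nLastToken, false, false);
    //     // → this.lToken[3]["nMergeUntil"] = 4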

    rewriteFromTags (bDebug=false) {
        // rewrite the sentence, modify tokens, purge the token list
        if (bDebug) {
            console.log("REWRITE");
        }
        let lNewToken = [];
        let nMergeUntil = 0;
        let dTokenMerger = null;
        for (let [iToken, dToken] of this.lToken.entries()) {
            let bKeepToken = true;
            if (dToken["sType"] != "INFO") {
                if (nMergeUntil && iToken <= nMergeUntil) {
                    dTokenMerger["sValue"] += " ".repeat(dToken["nStart"] - dTokenMerger["nEnd"]) + dToken["sValue"];
                    dTokenMerger["nEnd"] = dToken["nEnd"];
                    if (bDebug) {
                        console.log("  MERGED TOKEN: " + dTokenMerger["sValue"]);
                    }
                    bKeepToken = false;
                }
                if (dToken.hasOwnProperty("nMergeUntil")) {
                    if (iToken > nMergeUntil) { // this token is not already merged with a previous token
                        dTokenMerger = dToken;
                    }
                    if (dToken["nMergeUntil"] > nMergeUntil) {
                        nMergeUntil = dToken["nMergeUntil"];
                    }
                    delete dToken["nMergeUntil"];
                }
                else if (dToken.hasOwnProperty("bToRemove")) {
                    if (bDebug) {
                        console.log("  REMOVED: " + dToken["sValue"]);
                    }
                    this.sSentence = this.sSentence.slice(0, dToken["nStart"]) + " ".repeat(dToken["nEnd"] - dToken["nStart"]) + this.sSentence.slice(dToken["nEnd"]);
                    bKeepToken = false;
                }
            }
            //
            if (bKeepToken) {
                lNewToken.push(dToken);
                if (dToken.hasOwnProperty("sNewValue")) {
                    // rewrite token and sentence
                    if (bDebug) {
                        console.log(dToken["sValue"] + " -> " + dToken["sNewValue"]);
                    }
                    dToken["sRealValue"] = dToken["sValue"];
                    dToken["sValue"] = dToken["sNewValue"];
                    let nDiffLen = dToken["sRealValue"].length - dToken["sNewValue"].length;
                    let sNewRepl = (nDiffLen >= 0) ? dToken["sNewValue"] + " ".repeat(nDiffLen) : dToken["sNewValue"].slice(0, dToken["sRealValue"].length);
                    this.sSentence = this.sSentence.slice(0,dToken["nStart"]) + sNewRepl + this.sSentence.slice(dToken["nEnd"]);
                    delete dToken["sNewValue"];
                }
            }
            else {
                try {
                    this.dTokenPos.delete(dToken["nStart"]);
                }
                catch (e) {
                    console.log(this.asString());
                    console.log(dToken);
                }
            }
        }
        if (bDebug) {
            console.log("  TEXT REWRITED: " + this.sSentence);
        }
        this.lToken.length = 0;
        this.lToken = lNewToken;
    }
};


//////// Common functions

function option (sOpt) {
    // return true if option sOpt is active
    return _dOptions.get(sOpt);
}

var re = {
    search: function (sRegex, sText) {
        if (sRegex.startsWith("(?i)")) {
            return sText.search(new RegExp(sRegex.slice(4), "i")) !== -1;
        } else {
            return sText.search(sRegex) !== -1;
        }
    },

    createRegExp: function (sRegex) {
        if (sRegex.startsWith("(?i)")) {
            return new RegExp(sRegex.slice(4), "i");
        } else {
            return new RegExp(sRegex);
        }
    }
}


//////// functions to get text outside pattern scope

// warning: check compile_rules.py to understand how it works

function nextword (s, iStart, n) {
    // get the nth word of the input string or empty string
    let z = new RegExp("^(?: +[a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ%_-]+){" + (n-1).toString() + "} +([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ%_-]+)", "ig");
    let m = z.exec(s.slice(iStart));
    if (!m) {
        return null;
    }
    return [iStart + z.lastIndex - m[1].length, m[1]];
}

function prevword (s, iEnd, n) {
    // get the (-)nth word of the input string or empty string
    let z = new RegExp("([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ%_-]+) +(?:[a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ%_-]+ +){" + (n-1).toString() + "}$", "i");
    let m = z.exec(s.slice(0, iEnd));
    if (!m) {
        return null;
    }
    return [m.index, m[1]];
}

function nextword1 (s, iStart) {
    // get next word (optimization)
    let _zNextWord = new RegExp ("^ +([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ_][a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ_-]*)", "ig");
    let m = _zNextWord.exec(s.slice(iStart));
    if (!m) {
        return null;
    }
    return [iStart + _zNextWord.lastIndex - m[1].length, m[1]];
}

const _zPrevWord = new RegExp ("([a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ_][a-zà-öA-Zø-ÿÀ-Ö0-9Ø-ßĀ-ʯfi-stᴀ-ᶿ_-]*) +$", "i");

function prevword1 (s, iEnd) {
    // get previous word (optimization)
    let m = _zPrevWord.exec(s.slice(0, iEnd));
    if (!m) {
        return null;
    }
    return [m.index, m[1]];
}

function look (s, sPattern, sNegPattern=null) {
    // seek sPattern in s (before/after/fulltext), if antipattern sNegPattern not in s
    try {
        if (sNegPattern && re.search(sNegPattern, s)) {
            return false;
        }
        return re.search(sPattern, s);
    }
    catch (e) {
        console.error(e);
    }
    return false;
}



//////// Analyse groups for regex rules


function displayInfo (dTokenPos, aWord) {
    // for debugging: info of word
    if (!aWord) {
        console.log("> nothing to find");
        return true;
    }
    let lMorph = _oSpellChecker.getMorph(aWord[1]);
    if (lMorph.length === 0) {
        console.log("> not in dictionary");
        return true;
    }
    if (dTokenPos.has(aWord[0])) {
        console.log("DA: " + dTokenPos.get(aWord[0]));
    }
    console.log("FSA: " + lMorph);
    return true;
}

function morph (dTokenPos, aWord, sPattern, sNegPattern, bNoWord=false) {
    // analyse a tuple (position, word), returns true if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation on)
    if (!aWord) {
        return bNoWord;
    }
    let lMorph = (dTokenPos.has(aWord[0]) && dTokenPos.get(aWord[0])["lMorph"]) ? dTokenPos.get(aWord[0])["lMorph"] : _oSpellChecker.getMorph(aWord[1]);
    if (lMorph.length === 0) {
        return false;
    }
    if (sNegPattern) {
        // check negative condition
        if (sNegPattern === "*") {
            // all morph must match sPattern
            return lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1));
        }
        else {
            if (lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                return false;
            }
        }
    }
    // search sPattern
    return lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1));
}

function analyse (sWord, sPattern, sNegPattern) {
    // analyse a word, returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation off)
    let lMorph = _oSpellChecker.getMorph(sWord);
    if (lMorph.length === 0) {
        return false;
    }
    if (sNegPattern) {
        // check negative condition
        if (sNegPattern === "*") {
            // all morph must match sPattern
            return lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1));
        }
        else {
            if (lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                return false;
            }
        }
    }
    // search sPattern
    return lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1));
}


//// Analyse tokens for graph rules

function g_value (dToken, sValues, nLeft=null, nRight=null) {
    // test if <dToken['sValue']> is in sValues (each value should be separated with |)
    let sValue = (nLeft === null) ? "|"+dToken["sValue"]+"|" : "|"+dToken["sValue"].slice(nLeft, nRight)+"|";
    if (sValues.includes(sValue)) {
        return true;
    }
    if (dToken["sValue"].slice(0,2).gl_isTitle()) { // we test only 2 first chars, to make valid words such as "Laissez-les", "Passe-partout".
        if (sValues.includes(sValue.toLowerCase())) {
            return true;
        }
    }
    else if (dToken["sValue"].gl_isUpperCase()) {
        //if sValue.lower() in sValues:
        //    return true;
        sValue = "|"+sValue.slice(1).gl_toCapitalize();
        if (sValues.includes(sValue)) {
            return true;
        }
    }
    return false;
}

function g_morph (dToken, sPattern, sNegPattern="", nLeft=null, nRight=null, bMemorizeMorph=true) {
    // analyse a token, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies
    let lMorph;
    if (dToken.hasOwnProperty("lMorph")) {
        lMorph = dToken["lMorph"];
    }
    else {
        if (nLeft !== null) {
            let sValue = (nRight !== null) ? dToken["sValue"].slice(nLeft, nRight) : dToken["sValue"].slice(nLeft);
            lMorph = _oSpellChecker.getMorph(sValue);
            if (bMemorizeMorph) {
                dToken["lMorph"] = lMorph;
            }
        } else {
            lMorph = _oSpellChecker.getMorph(dToken["sValue"]);
        }
    }
    if (lMorph.length == 0) {
        return false;
    }
    // check negative condition
    if (sNegPattern) {
        if (sNegPattern == "*") {
            // all morph must match sPattern
            return lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1));
        }
        else {
            if (lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                return false;
            }
        }
    }
    // search sPattern
    return lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1));
}

function g_analyse (dToken, sPattern, sNegPattern="", nLeft=null, nRight=null, bMemorizeMorph=true) {
    // analyse a token, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies
    let lMorph;
    if (nLeft !== null) {
        let sValue = (nRight !== null) ? dToken["sValue"].slice(nLeft, nRight) : dToken["sValue"].slice(nLeft);
        lMorph = _oSpellChecker.getMorph(sValue);
        if (bMemorizeMorph) {
            dToken["lMorph"] = lMorph;
        }
    } else {
        lMorph = _oSpellChecker.getMorph(dToken["sValue"]);
    }
    if (lMorph.length == 0) {
        return false;
    }
    // check negative condition
    if (sNegPattern) {
        if (sNegPattern == "*") {
            // all morph must match sPattern
            return lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1));
        }
        else {
            if (lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                return false;
            }
        }
    }
    // search sPattern
    return lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1));
}

function g_merged_analyse (dToken1, dToken2, cMerger, sPattern, sNegPattern="", bSetMorph=true) {
    // merge two token values, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies (disambiguation off)
    let lMorph = _oSpellChecker.getMorph(dToken1["sValue"] + cMerger + dToken2["sValue"]);
    if (lMorph.length == 0) {
        return false;
    }
    // check negative condition
    if (sNegPattern) {
        if (sNegPattern == "*") {
            // all morph must match sPattern
            let bResult = lMorph.every(sMorph  =>  (sMorph.search(sPattern) !== -1));
            if (bResult && bSetMorph) {
                dToken1["lMorph"] = lMorph;
            }
            return bResult;
        }
        else {
            if (lMorph.some(sMorph  =>  (sMorph.search(sNegPattern) !== -1))) {
                return false;
            }
        }
    }
    // search sPattern
    let bResult = lMorph.some(sMorph  =>  (sMorph.search(sPattern) !== -1));
    if (bResult && bSetMorph) {
        dToken1["lMorph"] = lMorph;
    }
    return bResult;
}

function g_tag_before (dToken, dTags, sTag) {
    if (!dTags.has(sTag)) {
        return false;
    }
    if (dToken["i"] > dTags.get(sTag)[0]) {
        return true;
    }
    return false;
}

function g_tag_after (dToken, dTags, sTag) {
    if (!dTags.has(sTag)) {
        return false;
    }
    if (dToken["i"] < dTags.get(sTag)[1]) {
        return true;
    }
    return false;
}

function g_tag (dToken, sTag) {
    return dToken.hasOwnProperty("aTags") && dToken["aTags"].has(sTag);
}

function g_space_between_tokens (dToken1, dToken2, nMin, nMax=null) {
    let nSpace = dToken2["nStart"] - dToken1["nEnd"];
    if (nSpace < nMin) {
        return false;
    }
    if (nMax !== null && nSpace > nMax) {
        return false;
    }
    return true;
}

function g_token (lToken, i) {
    if (i < 0) {
        return lToken[0];
    }
    if (i >= lToken.length) {
        return lToken[lToken.length-1];
    }
    return lToken[i];
}


//////// Disambiguator

function select (dTokenPos, nPos, sWord, sPattern, lDefault=null) {
    if (!sWord) {
        return true;
    }
    if (!dTokenPos.has(nPos)) {
        console.log("Error. There should be a token at this position: ", nPos);
        return true;
    }

    let lMorph = _oSpellChecker.getMorph(sWord);
    if (lMorph.length === 0  ||  lMorph.length === 1) {
        return true;
    }
    let lSelect = lMorph.filter( sMorph => sMorph.search(sPattern) !== -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != lMorph.length) {
            dTokenPos.get(nPos)["lMorph"] = lSelect;
        }
    } else if (lDefault) {
        dTokenPos.get(nPos)["lMorph"] = lDefault;
    }
    return true;
}

function exclude (dTokenPos, nPos, sWord, sPattern, lDefault=null) {
    if (!sWord) {
        return true;
    }
    if (!dTokenPos.has(nPos)) {
        console.log("Error. There should be a token at this position: ", nPos);
        return true;
    }

    let lMorph = _oSpellChecker.getMorph(sWord);
    if (lMorph.length === 0  ||  lMorph.length === 1) {
        return true;
    }

    let lSelect = lMorph.filter( sMorph => sMorph.search(sPattern) === -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != lMorph.length) {
            dTokenPos.get(nPos)["lMorph"] = lSelect;
        }
    } else if (lDefault) {
        dTokenPos.get(nPos)["lMorph"] = lDefault;
    }
    return true;
}

function define (dTokenPos, nPos, lMorph) {
    dTokenPos.get(nPos)["lMorph"] = lMorph;
    return true;
}


//// Disambiguation for graph rules

function g_select (dToken, sPattern, lDefault=null) {
    // select morphologies for <dToken> according to <sPattern>, always return true
    let lMorph = (dToken.hasOwnProperty("lMorph")) ? dToken["lMorph"] : _oSpellChecker.getMorph(dToken["sValue"]);
    if (lMorph.length === 0  || lMorph.length === 1) {
        if (lDefault) {
            dToken["lMorph"] = lDefault;
        }
        return true;
    }
    let lSelect = lMorph.filter( sMorph => sMorph.search(sPattern) !== -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != lMorph.length) {
            dToken["lMorph"] = lSelect;
        }
    } else if (lDefault) {
        dToken["lMorph"] = lDefault;
    }
    return true;
}

function g_exclude (dToken, sPattern, lDefault=null) {
    // exclude morphologies of <dToken> matching <sPattern>, always return true
    let lMorph = (dToken.hasOwnProperty("lMorph")) ? dToken["lMorph"] : _oSpellChecker.getMorph(dToken["sValue"]);
    if (lMorph.length === 0  || lMorph.length === 1) {
        if (lDefault) {
            dToken["lMorph"] = lDefault;
        }
        return true;
    }
    let lSelect = lMorph.filter( sMorph => sMorph.search(sPattern) === -1 );
    if (lSelect.length > 0) {
        if (lSelect.length != lMorph.length) {
            dToken["lMorph"] = lSelect;
        }
    } else if (lDefault) {
        dToken["lMorph"] = lDefault;
    }
    return true;
}

function g_define (dToken, lMorph) {
    // set morphologies of <dToken>, always return true
    dToken["lMorph"] = lMorph;
    return true;
}

function g_define_from (dToken, nLeft=null, nRight=null) {
    let sValue = dToken["sValue"];
    if (nLeft !== null) {
        sValue = (nRight !== null) ? sValue.slice(nLeft, nRight) : sValue.slice(nLeft);
    }
    dToken["lMorph"] = _oSpellChecker.getMorph(sValue);
    return true;
}


//////// GRAMMAR CHECKER PLUGINS

${pluginsJS}


// generated code, do not edit
const oEvalFunc = {
    // callables for regex rules
${callablesJS}

    // callables for graph rules
${graph_callablesJS}
}


if (typeof(exports) !== 'undefined') {
    exports.lang = gc_engine.lang;
    exports.locales = gc_engine.locales;
    exports.pkg = gc_engine.pkg;
    exports.name = gc_engine.name;
    exports.version = gc_engine.version;
    exports.author = gc_engine.author;
    // init
    exports.load = gc_engine.load;
    exports.getSpellChecker = gc_engine.getSpellChecker;
    // sentence
    exports._zEndOfSentence = gc_engine._zEndOfSentence;
    exports._zBeginOfParagraph = gc_engine._zBeginOfParagraph;
    exports._zEndOfParagraph = gc_engine._zEndOfParagraph;
    exports.getSentenceBoundaries = gc_engine.getSentenceBoundaries;
    // rules
    exports.ignoreRule = gc_engine.ignoreRule;
    exports.resetIgnoreRules = gc_engine.resetIgnoreRules;
    exports.reactivateRule = gc_engine.reactivateRule;
    exports.listRules = gc_engine.listRules;
    exports.getRules = gc_engine.getRules;
    // options
    exports.setOption = gc_engine.setOption;
    exports.setOptions = gc_engine.setOptions;
    exports.getOptions = gc_engine.getOptions;
    exports.getDefaultOptions = gc_engine.getDefaultOptions;
    exports.resetOptions = gc_engine.resetOptions;
    // other
    exports.TextParser = TextParser;
}

Modified gc_core/js/tests.js from [3d850428e6] to [1eda44faf6].

        catch (e) {
            console.error(e);
        }

        if (bShowUntested) {
            i = 0;
            for (let [sOpt, sLineId, sRuleId] of this.gce.listRules()) {
                if (!this._aRuleTested.has(sLineId) && !/^[0-9]+[sp]$|^[pd]_/.test(sRuleId)) {
                    sUntestedRules += sRuleId + ", ";
                    i += 1;
                }
            }
            if (i > 0) {
                yield sUntestedRules + "\n[" + i.toString() + " untested rules]";
            }
        }

        catch (e) {
            console.error(e);
        }

        if (bShowUntested) {
            i = 0;
            for (let [sOpt, sLineId, sRuleId] of this.gce.listRules()) {
                if (sOpt !== "@@@@" && !this._aRuleTested.has(sLineId) && !/^[0-9]+[sp]$|^[pd]_/.test(sRuleId)) {
                    sUntestedRules += sLineId + "/" + sRuleId + ", ";
                    i += 1;
                }
            }
            if (i > 0) {
                yield sUntestedRules + "\n[" + i.toString() + " untested rules]";
            }
        }

Modified gc_core/py/__init__.py from [aeadedff14] to [49f46a05ff].

from .grammar_checker import *
"""
Grammar checker
"""

from .grammar_checker import *

Modified gc_core/py/grammar_checker.py from [79ce1061e8] to [634e5c7c61].

# Grammalecte
# Main class: wrapper


import importlib
import json

from . import text


class GrammarChecker:


    def __init__ (self, sLangCode, sContext="Python"):
        self.sLangCode = sLangCode
        # Grammar checker engine
        self.gce = importlib.import_module("."+sLangCode, "grammalecte")
        self.gce.load(sContext)
        # Spell checker
................................................................................
        self.oSpellChecker = self.gce.getSpellChecker()
        # Lexicographer
        self.oLexicographer = None
        # Text formatter
        self.oTextFormatter = None

    def getGCEngine (self):

        return self.gce

    def getSpellChecker (self):

        return self.oSpellChecker

    def getTextFormatter (self):

        if self.oTextFormatter == None:
            self.tf = importlib.import_module("."+self.sLangCode+".textformatter", "grammalecte")
        self.oTextFormatter = self.tf.TextFormatter()
        return self.oTextFormatter

    def getLexicographer (self):

        if self.oLexicographer == None:
            self.lxg = importlib.import_module("."+self.sLangCode+".lexicographe", "grammalecte")
        self.oLexicographer = self.lxg.Lexicographe(self.oSpellChecker)
        return self.oLexicographer

    def displayGCOptions (self):

        self.gce.displayOptions()

    def getParagraphErrors (self, sText, dOptions=None, bContext=False, bSpellSugg=False, bDebug=False):
        "returns a tuple: (grammar errors, spelling errors)"
        aGrammErrs = self.gce.parse(sText, "FR", bDebug=bDebug, dOptions=dOptions, bContext=bContext)
        aSpellErrs = self.oSpellChecker.parseParagraph(sText, bSpellSugg)
        return aGrammErrs, aSpellErrs

    def generateText (self, sText, bEmptyIfNoErrors=False, bSpellSugg=False, nWidth=100, bDebug=False):

        pass

    def generateTextAsJSON (self, sText, bContext=False, bEmptyIfNoErrors=False, bSpellSugg=False, bReturnText=False, bDebug=False):

        pass

    def generateParagraph (self, sText, dOptions=None, bEmptyIfNoErrors=False, bSpellSugg=False, nWidth=100, bDebug=False):

        aGrammErrs, aSpellErrs = self.getParagraphErrors(sText, dOptions, False, bSpellSugg, bDebug)
        if bEmptyIfNoErrors and not aGrammErrs and not aSpellErrs:
            return ""
        return text.generateParagraph(sText, aGrammErrs, aSpellErrs, nWidth)

    def generateParagraphAsJSON (self, iIndex, sText, dOptions=None, bContext=False, bEmptyIfNoErrors=False, bSpellSugg=False, bReturnText=False, lLineSet=None, bDebug=False):

        aGrammErrs, aSpellErrs = self.getParagraphErrors(sText, dOptions, bContext, bSpellSugg, bDebug)
        aGrammErrs = list(aGrammErrs)
        if bEmptyIfNoErrors and not aGrammErrs and not aSpellErrs:
            return ""
        if lLineSet:
            aGrammErrs, aSpellErrs = text.convertToXY(aGrammErrs, aSpellErrs, lLineSet)
            return json.dumps({ "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)
        if bReturnText:
            return json.dumps({ "iParagraph": iIndex, "sText": sText, "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)
        return json.dumps({ "iParagraph": iIndex, "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)
"""
Grammalecte, grammar checker
"""

import importlib
import json

from . import text


class GrammarChecker:
    "GrammarChecker: Wrapper for the grammar checker engine"

    def __init__ (self, sLangCode, sContext="Python"):
        self.sLangCode = sLangCode
        # Grammar checker engine
        self.gce = importlib.import_module("."+sLangCode, "grammalecte")
        self.gce.load(sContext)
        # Spell checker
................................................................................
        self.oSpellChecker = self.gce.getSpellChecker()
        # Lexicographer
        self.oLexicographer = None
        # Text formatter
        self.oTextFormatter = None

    def getGCEngine (self):
        "return the grammar checker object"
        return self.gce

    def getSpellChecker (self):
        "return the spell checker object"
        return self.oSpellChecker

    def getTextFormatter (self):
        "load and return the text formatter"
        if self.oTextFormatter is None:
            tf = importlib.import_module("."+self.sLangCode+".textformatter", "grammalecte")
            self.oTextFormatter = tf.TextFormatter()
        return self.oTextFormatter

    def getLexicographer (self):
        "load and return the lexicographer"
        if self.oLexicographer is None:
            lxg = importlib.import_module("."+self.sLangCode+".lexicographe", "grammalecte")
            self.oLexicographer = lxg.Lexicographe(self.oSpellChecker)
        return self.oLexicographer

    def displayGCOptions (self):
        "display the grammar checker options"
        self.gce.displayOptions()

    def getParagraphErrors (self, sText, dOptions=None, bContext=False, bSpellSugg=False, bDebug=False):
        "returns a tuple: (grammar errors, spelling errors)"
        aGrammErrs = self.gce.parse(sText, "FR", bDebug=bDebug, dOptions=dOptions, bContext=bContext)
        aSpellErrs = self.oSpellChecker.parseParagraph(sText, bSpellSugg)
        return aGrammErrs, aSpellErrs

    def generateText (self, sText, bEmptyIfNoErrors=False, bSpellSugg=False, nWidth=100, bDebug=False):
        "[todo]"
        pass

    def generateTextAsJSON (self, sText, bContext=False, bEmptyIfNoErrors=False, bSpellSugg=False, bReturnText=False, bDebug=False):
        "[todo]"
        pass

    def generateParagraph (self, sText, dOptions=None, bEmptyIfNoErrors=False, bSpellSugg=False, nWidth=100, bDebug=False):
        "parse text and return a readable text with underline errors"
        aGrammErrs, aSpellErrs = self.getParagraphErrors(sText, dOptions, False, bSpellSugg, bDebug)
        if bEmptyIfNoErrors and not aGrammErrs and not aSpellErrs:
            return ""
        return text.generateParagraph(sText, aGrammErrs, aSpellErrs, nWidth)

    def generateParagraphAsJSON (self, iIndex, sText, dOptions=None, bContext=False, bEmptyIfNoErrors=False, bSpellSugg=False, bReturnText=False, lLineSet=None, bDebug=False):
        "parse text and return errors as a JSON string"
        aGrammErrs, aSpellErrs = self.getParagraphErrors(sText, dOptions, bContext, bSpellSugg, bDebug)
        aGrammErrs = list(aGrammErrs)
        if bEmptyIfNoErrors and not aGrammErrs and not aSpellErrs:
            return ""
        if lLineSet:
            aGrammErrs, aSpellErrs = text.convertToXY(aGrammErrs, aSpellErrs, lLineSet)
            return json.dumps({ "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)
        if bReturnText:
            return json.dumps({ "iParagraph": iIndex, "sText": sText, "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)
        return json.dumps({ "iParagraph": iIndex, "lGrammarErrors": aGrammErrs, "lSpellingErrors": aSpellErrs }, ensure_ascii=False)

Modified gc_core/py/lang_core/gc_engine.py from [72ecd7c680] to [b63b69316e].


# Grammalecte
# Grammar checker engine


import re
import sys
import os
import traceback
#import unicodedata
from itertools import chain

from ..graphspell.spellchecker import SpellChecker
from ..graphspell.echo import echo
from . import gc_options


__all__ = [ "lang", "locales", "pkg", "name", "version", "author", \
            "load", "parse", "getSpellChecker", \
            "setOption", "setOptions", "getOptions", "getDefaultOptions", "getOptionsLabels", "resetOptions", "displayOptions", \
            "ignoreRule", "resetIgnoreRules", "reactivateRule", "listRules", "displayRules" ]

__version__ = "${version}"
................................................................................
lang = "${lang}"
locales = ${loc}
pkg = "${implname}"
name = "${name}"
version = "${version}"
author = "${author}"


_rules = None                               # module gc_rules


# data

_sAppContext = ""                           # what software is running
_dOptions = None
_aIgnoredRules = set()
_oSpellChecker = None
_dAnalyses = {}                             # cache for data from dictionary



#### Parsing

def parse (sText, sCountry="${country_default}", bDebug=False, dOptions=None, bContext=False):
    "analyses the paragraph sText and returns list of errors"
    #sText = unicodedata.normalize("NFC", sText)
    aErrors = None
    sAlt = sText
    dDA = {}        # Disambiguisator. Key = position; value = list of morphologies
    dPriority = {}  # Key = position; value = priority
    dOpt = _dOptions  if not dOptions  else dOptions

    # parse paragraph
    try:
        sNew, aErrors = _proofread(sText, sAlt, 0, True, dDA, dPriority, sCountry, dOpt, bDebug, bContext)
        if sNew:
            sText = sNew
    except:
        raise

    # cleanup
    if " " in sText:
        sText = sText.replace(" ", ' ') # nbsp
    if " " in sText:
        sText = sText.replace(" ", ' ') # nnbsp
    if "'" in sText:
        sText = sText.replace("'", "’")
    if "‑" in sText:
        sText = sText.replace("‑", "-") # nobreakdash

    # parse sentences
    for iStart, iEnd in _getSentenceBoundaries(sText):
        if 4 < (iEnd - iStart) < 2000:
            dDA.clear()
            try:
                _, errs = _proofread(sText[iStart:iEnd], sAlt[iStart:iEnd], iStart, False, dDA, dPriority, sCountry, dOpt, bDebug, bContext)
                aErrors.update(errs)
            except:
                raise
    return aErrors.values() # this is a view (iterable)


def _getSentenceBoundaries (sText):
    iStart = _zBeginOfParagraph.match(sText).end()
    for m in _zEndOfSentence.finditer(sText):
        yield (iStart, m.end())
        iStart = m.end()


def _proofread (s, sx, nOffset, bParagraph, dDA, dPriority, sCountry, dOptions, bDebug, bContext):
    dErrs = {}
    bChange = False
    bIdRule = option('idrule')

    for sOption, lRuleGroup in _getRules(bParagraph):
        if not sOption or dOptions.get(sOption, False):
            for zRegex, bUppercase, sLineId, sRuleId, nPriority, lActions in lRuleGroup:
                if sRuleId not in _aIgnoredRules:
                    for m in zRegex.finditer(s):
                        bCondMemo = None
                        for sFuncCond, cActionType, sWhat, *eAct in lActions:
                            # action in lActions: [ condition, action type, replacement/suggestion/action[, iGroup[, message, URL]] ]
                            try:
                                bCondMemo = not sFuncCond or globals()[sFuncCond](s, sx, m, dDA, sCountry, bCondMemo)
                                if bCondMemo:
                                    if cActionType == "-":
                                        # grammar error
                                        nErrorStart = nOffset + m.start(eAct[0])
                                        if nErrorStart not in dErrs or nPriority > dPriority[nErrorStart]:
                                            dErrs[nErrorStart] = _createError(s, sx, sWhat, nOffset, m, eAct[0], sLineId, sRuleId, bUppercase, eAct[1], eAct[2], bIdRule, sOption, bContext)
                                            dPriority[nErrorStart] = nPriority
                                    elif cActionType == "~":
                                        # text processor
                                        s = _rewrite(s, sWhat, eAct[0], m, bUppercase)
                                        bChange = True
                                        if bDebug:
                                            echo("~ " + s + "  -- " + m.group(eAct[0]) + "  # " + sLineId)
                                    elif cActionType == "=":
                                        # disambiguation
                                        globals()[sWhat](s, m, dDA)
                                        if bDebug:
                                            echo("= " + m.group(0) + "  # " + sLineId + "\nDA: " + str(dDA))
                                    elif cActionType == ">":
                                        # we do nothing, this test is just a condition to apply all following actions
                                        pass
                                    else:
                                        echo("# error: unknown action at " + sLineId)
                                elif cActionType == ">":
                                    break
                            except Exception as e:
                                raise Exception(str(e), "# " + sLineId + " # " + sRuleId)
    if bChange:
        return (s, dErrs)
    return (False, dErrs)


def _createWriterError (s, sx, sRepl, nOffset, m, iGroup, sLineId, sRuleId, bUppercase, sMsg, sURL, bIdRule, sOption, bContext):
    "error for Writer (LO/OO)"
    xErr = SingleProofreadingError()
    #xErr = uno.createUnoStruct( "com.sun.star.linguistic2.SingleProofreadingError" )
    xErr.nErrorStart = nOffset + m.start(iGroup)
    xErr.nErrorLength = m.end(iGroup) - m.start(iGroup)
    xErr.nErrorType = PROOFREADING
    xErr.aRuleIdentifier = sRuleId
    # suggestions
    if sRepl[0:1] == "=":
        sugg = globals()[sRepl[1:]](s, m)
        if sugg:
            if bUppercase and m.group(iGroup)[0:1].isupper():
                xErr.aSuggestions = tuple(map(str.capitalize, sugg.split("|")))
            else:
                xErr.aSuggestions = tuple(sugg.split("|"))
        else:
            xErr.aSuggestions = ()
    elif sRepl == "_":
        xErr.aSuggestions = ()
    else:
        if bUppercase and m.group(iGroup)[0:1].isupper():
            xErr.aSuggestions = tuple(map(str.capitalize, m.expand(sRepl).split("|")))
        else:
            xErr.aSuggestions = tuple(m.expand(sRepl).split("|"))
    # Message
    if sMsg[0:1] == "=":
        sMessage = globals()[sMsg[1:]](s, m)
    else:
        sMessage = m.expand(sMsg)
    xErr.aShortComment = sMessage   # sMessage.split("|")[0]     # in context menu
    xErr.aFullComment = sMessage   # sMessage.split("|")[-1]    # in dialog
    if bIdRule:
        xErr.aShortComment += "  # " + sLineId + " # " + sRuleId
    # URL
    if sURL:
        p = PropertyValue()
        p.Name = "FullCommentURL"
        p.Value = sURL
        xErr.aProperties = (p,)
    else:
        xErr.aProperties = ()
    return xErr


def _createDictError (s, sx, sRepl, nOffset, m, iGroup, sLineId, sRuleId, bUppercase, sMsg, sURL, bIdRule, sOption, bContext):
    "error as a dictionary"
    dErr = {}
    dErr["nStart"] = nOffset + m.start(iGroup)
    dErr["nEnd"] = nOffset + m.end(iGroup)
    dErr["sLineId"] = sLineId
    dErr["sRuleId"] = sRuleId
    dErr["sType"] = sOption  if sOption  else "notype"
    # suggestions
    if sRepl[0:1] == "=":
        sugg = globals()[sRepl[1:]](s, m)
        if sugg:
            if bUppercase and m.group(iGroup)[0:1].isupper():
                dErr["aSuggestions"] = list(map(str.capitalize, sugg.split("|")))
            else:
                dErr["aSuggestions"] = sugg.split("|")
        else:
            dErr["aSuggestions"] = ()
    elif sRepl == "_":
        dErr["aSuggestions"] = ()
    else:
        if bUppercase and m.group(iGroup)[0:1].isupper():
            dErr["aSuggestions"] = list(map(str.capitalize, m.expand(sRepl).split("|")))
        else:
            dErr["aSuggestions"] = m.expand(sRepl).split("|")
    # Message
    if sMsg[0:1] == "=":
        sMessage = globals()[sMsg[1:]](s, m)
    else:
        sMessage = m.expand(sMsg)
    dErr["sMessage"] = sMessage
    if bIdRule:
        dErr["sMessage"] += "  # " + sLineId + " # " + sRuleId
    # URL
    dErr["URL"] = sURL  if sURL  else ""
    # Context
    if bContext:
        dErr['sUnderlined'] = sx[m.start(iGroup):m.end(iGroup)]
        dErr['sBefore'] = sx[max(0,m.start(iGroup)-80):m.start(iGroup)]
        dErr['sAfter'] = sx[m.end(iGroup):m.end(iGroup)+80]
    return dErr


def _rewrite (s, sRepl, iGroup, m, bUppercase):
    "text processor: write sRepl in s at iGroup position"
    nLen = m.end(iGroup) - m.start(iGroup)
    if sRepl == "*":
        sNew = " " * nLen
    elif sRepl == ">" or sRepl == "_" or sRepl == "~":
        sNew = sRepl + " " * (nLen-1)
    elif sRepl == "@":
        sNew = "@" * nLen
    elif sRepl[0:1] == "=":
        sNew = globals()[sRepl[1:]](s, m)
        sNew = sNew + " " * (nLen-len(sNew))
        if bUppercase and m.group(iGroup)[0:1].isupper():
            sNew = sNew.capitalize()
    else:
        sNew = m.expand(sRepl)
        sNew = sNew + " " * (nLen-len(sNew))
    return s[0:m.start(iGroup)] + sNew + s[m.end(iGroup):]


def ignoreRule (sRuleId):
    _aIgnoredRules.add(sRuleId)


def resetIgnoreRules ():
    _aIgnoredRules.clear()


def reactivateRule (sRuleId):
    _aIgnoredRules.discard(sRuleId)



def listRules (sFilter=None):
    "generator: returns typle (sOption, sLineId, sRuleId)"
    if sFilter:
        try:
            zFilter = re.compile(sFilter)
        except:
            echo("# Error. List rules: wrong regex.")
            sFilter = None
    for sOption, lRuleGroup in chain(_getRules(True), _getRules(False)):
        for _, _, sLineId, sRuleId, _, _ in lRuleGroup:
            if not sFilter or zFilter.search(sRuleId):
                yield (sOption, sLineId, sRuleId)


def displayRules (sFilter=None):
    echo("List of rules. Filter: << " + str(sFilter) + " >>")
    for sOption, sLineId, sRuleId in listRules(sFilter):
        echo("{:<10} {:<10} {}".format(sOption, sLineId, sRuleId))


#### init

try:
    # LibreOffice / OpenOffice
    from com.sun.star.linguistic2 import SingleProofreadingError
    from com.sun.star.text.TextMarkupType import PROOFREADING
    from com.sun.star.beans import PropertyValue
    #import lightproof_handler_${implname} as opt
    _createError = _createWriterError
except ImportError:
    _createError = _createDictError


def load (sContext="Python"):
    global _oSpellChecker
    global _sAppContext
    global _dOptions

    try:
        _oSpellChecker = SpellChecker("${lang}", "${dic_main_filename_py}", "${dic_extended_filename_py}", "${dic_community_filename_py}", "${dic_personal_filename_py}")
        _sAppContext = sContext
        _dOptions = dict(gc_options.getOptions(sContext))   # duplication necessary, to be able to reset to default
    except:
        traceback.print_exc()


def setOption (sOpt, bVal):
    if sOpt in _dOptions:
        _dOptions[sOpt] = bVal


def setOptions (dOpt):
    for sKey, bVal in dOpt.items():
        if sKey in _dOptions:
            _dOptions[sKey] = bVal


def getOptions ():
    return _dOptions


def getDefaultOptions ():
    return dict(gc_options.getOptions(_sAppContext))


def getOptionsLabels (sLang):
    return gc_options.getUI(sLang)


def displayOptions (sLang):
    echo("List of options")
    echo("\n".join( [ k+":\t"+str(v)+"\t"+gc_options.getUI(sLang).get(k, ("?", ""))[0]  for k, v  in sorted(_dOptions.items()) ] ))
    echo("")


def resetOptions ():
    global _dOptions
    _dOptions = dict(gc_options.getOptions(_sAppContext))


def getSpellChecker ():
    return _oSpellChecker


def _getRules (bParagraph):
    try:
        if not bParagraph:
            return _rules.lSentenceRules
        return _rules.lParagraphRules
    except:
................................................................................
    if not bParagraph:
        return _rules.lSentenceRules
    return _rules.lParagraphRules


def _loadRules ():
    from . import gc_rules
    global _rules
    _rules = gc_rules
    # compile rules regex
    for lRuleGroup in chain(_rules.lParagraphRules, _rules.lSentenceRules):
        for rule in lRuleGroup[1]:
            try:
                rule[0] = re.compile(rule[0])
            except:
                echo("Bad regular expression in # " + str(rule[2]))
                rule[0] = "(?i)<Grammalecte>"


def _getPath ():
    return os.path.join(os.path.dirname(sys.modules[__name__].__file__), __name__ + ".py")


#### common functions

# common regexes
_zEndOfSentence = re.compile('([.?!:;…][ .?!… »”")]*|.$)')
_zBeginOfParagraph = re.compile("^\W*")
_zEndOfParagraph = re.compile("\W*$")
_zNextWord = re.compile(" +(\w[\w-]*)")
_zPrevWord = re.compile("(\w[\w-]*) +$")


def option (sOpt):
    "return True if option sOpt is active"
    return _dOptions.get(sOpt, False)


def displayInfo (dDA, tWord):
    "for debugging: retrieve info of word"
    if not tWord:
        echo("> nothing to find")
        return True
    if tWord[1] not in _dAnalyses and not _storeMorphFromFSA(tWord[1]):
        echo("> not in FSA")
        return True
    if tWord[0] in dDA:
        echo("DA: " + str(dDA[tWord[0]]))
    echo("FSA: " + str(_dAnalyses[tWord[1]]))
    return True


def _storeMorphFromFSA (sWord):
    "retrieves morphologies list from _oSpellChecker -> _dAnalyses"
    global _dAnalyses
    _dAnalyses[sWord] = _oSpellChecker.getMorph(sWord)
    return True  if _dAnalyses[sWord]  else False


def morph (dDA, tWord, sPattern, bStrict=True, bNoWord=False):
    "analyse a tuple (position, word), return True if sPattern in morphologies (disambiguation on)"
    if not tWord:
        return bNoWord
    if tWord[1] not in _dAnalyses and not _storeMorphFromFSA(tWord[1]):
        return False
    lMorph = dDA[tWord[0]]  if tWord[0] in dDA  else _dAnalyses[tWord[1]]
    if not lMorph:
        return False
    p = re.compile(sPattern)
    if bStrict:
        return all(p.search(s)  for s in lMorph)
    return any(p.search(s)  for s in lMorph)


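The heart of morph above (and of the analyse/morphex variants that follow) is a single regex test applied over a list of morphology strings, with bStrict switching between all() and any(). A self-contained sketch of that step, using made-up morphology strings purely as placeholders:

import re

def _match_morph (lMorph, sPattern, bStrict=True):
    # same all()/any() logic as morph() and analyse()
    p = re.compile(sPattern)
    if bStrict:
        return all(p.search(s)  for s in lMorph)
    return any(p.search(s)  for s in lMorph)

# hypothetical morphology strings, for illustration only
lMorph = [">porte :N:f:s", ">porter :V1:Ip:3s"]
print(_match_morph(lMorph, ":V", bStrict=False))    # True: at least one analysis matches
print(_match_morph(lMorph, ":V", bStrict=True))     # False: not every analysis matches

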
def morphex (dDA, tWord, sPattern, sNegPattern, bNoWord=False):
    "analyse a tuple (position, word), returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation on)"
    if not tWord:
        return bNoWord
    if tWord[1] not in _dAnalyses and not _storeMorphFromFSA(tWord[1]):
        return False
    lMorph = dDA[tWord[0]]  if tWord[0] in dDA  else _dAnalyses[tWord[1]]
    # check negative condition
    np = re.compile(sNegPattern)
    if any(np.search(s)  for s in lMorph):
        return False
    # search sPattern
    p = re.compile(sPattern)
    return any(p.search(s)  for s in lMorph)


def analyse (sWord, sPattern, bStrict=True):
    "analyse a word, return True if sPattern in morphologies (disambiguation off)"
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return False
    if not _dAnalyses[sWord]:
        return False
    p = re.compile(sPattern)
    if bStrict:
        return all(p.search(s)  for s in _dAnalyses[sWord])
    return any(p.search(s)  for s in _dAnalyses[sWord])


def analysex (sWord, sPattern, sNegPattern):
    "analyse a word, returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation off)"
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return False
    # check negative condition
    np = re.compile(sNegPattern)
    if any(np.search(s)  for s in _dAnalyses[sWord]):
        return False
    # search sPattern
    p = re.compile(sPattern)
    return any(p.search(s)  for s in _dAnalyses[sWord])


def stem (sWord):
    "returns a list of sWord's stems"
    if not sWord:
        return []
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return []
    return [ s[1:s.find(" ")]  for s in _dAnalyses[sWord] ]


## functions to get text outside pattern scope

# warning: check compile_rules.py to understand how it works

def nextword (s, iStart, n):
    "get the nth word of the input string or empty string"
    m = re.match("(?: +[\\w%-]+){" + str(n-1) + "} +([\\w%-]+)", s[iStart:])
    if not m:
        return None
    return (iStart+m.start(1), m.group(1))

................................................................................
    if sNegPattern and re.search(sNegPattern, s):
        return False
    if re.search(sPattern, s):
        return True
    return False


def look_chk1 (dDA, s, nOffset, sPattern, sPatternGroup1, sNegPatternGroup1=None):
    "returns True if s has pattern sPattern and m.group(1) has pattern sPatternGroup1"
    m = re.search(sPattern, s)
    if not m:
        return False
    try:
        sWord = m.group(1)
        nPos = m.start(1) + nOffset
    except:
        return False
    if sNegPatternGroup1:
        return morphex(dDA, (nPos, sWord), sPatternGroup1, sNegPatternGroup1)
    return morph(dDA, (nPos, sWord), sPatternGroup1, False)


#### Disambiguator

def select (dDA, nPos, sWord, sPattern, lDefault=None):
    if not sWord:
        return True
    if nPos in dDA:
        return True
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return True
    if len(_dAnalyses[sWord]) == 1:
        return True
    lSelect = [ sMorph  for sMorph in _dAnalyses[sWord]  if re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(_dAnalyses[sWord]):
            dDA[nPos] = lSelect
            #echo("= "+sWord+" "+str(dDA.get(nPos, "null")))
    elif lDefault:
        dDA[nPos] = lDefault
        #echo("= "+sWord+" "+str(dDA.get(nPos, "null")))
    return True


def exclude (dDA, nPos, sWord, sPattern, lDefault=None):
    if not sWord:
        return True
    if nPos in dDA:
        return True
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return True
    if len(_dAnalyses[sWord]) == 1:
        return True
    lSelect = [ sMorph  for sMorph in _dAnalyses[sWord]  if not re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(_dAnalyses[sWord]):
            dDA[nPos] = lSelect
            #echo("= "+sWord+" "+str(dDA.get(nPos, "null")))
    elif lDefault:
        dDA[nPos] = lDefault
        #echo("= "+sWord+" "+str(dDA.get(nPos, "null")))
    return True


def define (dDA, nPos, lMorph):
    dDA[nPos] = lMorph
    #echo("= "+str(nPos)+" "+str(dDA[nPos]))
    return True


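These disambiguation helpers never raise errors themselves: they only narrow (or replace) the list of analyses stored in dDA for a position and always return True, so the calling rule keeps running. A small standalone sketch of the narrowing rule used by select(), with invented analyses for the example:

import re

def _narrow (lMorph, sPattern, lDefault=None):
    # keep only the analyses matching sPattern, as select() does;
    # fall back to lDefault when nothing matches, otherwise keep everything
    lSelect = [ sMorph  for sMorph in lMorph  if re.search(sPattern, sMorph) ]
    if lSelect and len(lSelect) != len(lMorph):
        return lSelect
    if not lSelect and lDefault:
        return lDefault
    return lMorph

# invented analyses, for illustration only
print(_narrow([">être :V0e:Ip:3s", ">été :N:m:s"], ":V"))   # -> only the verbal reading is kept
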
#### GRAMMAR CHECKER PLUGINS

${plugins}


${callables}

"""
Grammalecte
Grammar checker engine
"""

import re
import sys
import os
import traceback
#import unicodedata
from itertools import chain

from ..graphspell.spellchecker import SpellChecker
from ..graphspell.tokenizer import Tokenizer
from ..graphspell.echo import echo
from . import gc_options

try:
    # LibreOffice / OpenOffice
    from com.sun.star.linguistic2 import SingleProofreadingError
    from com.sun.star.text.TextMarkupType import PROOFREADING
    from com.sun.star.beans import PropertyValue
    #import lightproof_handler_${implname} as opt
    _bWriterError = True
except ImportError:
    _bWriterError = False


__all__ = [ "lang", "locales", "pkg", "name", "version", "author", \
            "load", "parse", "getSpellChecker", \
            "setOption", "setOptions", "getOptions", "getDefaultOptions", "getOptionsLabels", "resetOptions", "displayOptions", \
            "ignoreRule", "resetIgnoreRules", "reactivateRule", "listRules", "displayRules" ]

__version__ = "${version}"
................................................................................
lang = "${lang}"
locales = ${loc}
pkg = "${implname}"
name = "${name}"
version = "${version}"
author = "${author}"

# Modules
_rules = None                               # module gc_rules
_rules_graph = None                         # module gc_rules_graph


# Data
_sAppContext = ""                           # what software is running
_dOptions = None

_oSpellChecker = None
_oTokenizer = None
_aIgnoredRules = set()


#### Initialization

def load (sContext="Python"):
    "initialization of the grammar checker"
    global _oSpellChecker
    global _sAppContext
    global _dOptions
    global _oTokenizer
    try:
        _oSpellChecker = SpellChecker("${lang}", "${dic_main_filename_py}", "${dic_extended_filename_py}", "${dic_community_filename_py}", "${dic_personal_filename_py}")
        _sAppContext = sContext
        _dOptions = dict(gc_options.getOptions(sContext))   # duplication necessary, to be able to reset to default
        _oTokenizer = _oSpellChecker.getTokenizer()
        _oSpellChecker.activateStorage()
    except:
        traceback.print_exc()


def getSpellChecker ():
    "return the spellchecker object"
    return _oSpellChecker


#### Rules

def _getRules (bParagraph):
    try:
        if not bParagraph:
            return _rules.lSentenceRules
        return _rules.lParagraphRules
    except:
................................................................................
    if not bParagraph:
        return _rules.lSentenceRules
    return _rules.lParagraphRules


def _loadRules ():
    from . import gc_rules
    from . import gc_rules_graph
    global _rules
    global _rules_graph
    _rules = gc_rules
    _rules_graph = gc_rules_graph
    # compile rules regex
    for sOption, lRuleGroup in chain(_rules.lParagraphRules, _rules.lSentenceRules):
        if sOption != "@@@@":
            for aRule in lRuleGroup:
                try:
                    aRule[0] = re.compile(aRule[0])
                except:
                    echo("Bad regular expression in # " + str(aRule[2]))
                    aRule[0] = "(?i)<Grammalecte>"




def ignoreRule (sRuleId):
    "disable rule <sRuleId>"
    _aIgnoredRules.add(sRuleId)


def resetIgnoreRules ():
    "clear all ignored rules"
    _aIgnoredRules.clear()


def reactivateRule (sRuleId):
    "(re)activate rule <sRuleId>"
    _aIgnoredRules.discard(sRuleId)


def listRules (sFilter=None):
    "generator: returns typle (sOption, sLineId, sRuleId)"
    if sFilter:
        try:
            zFilter = re.compile(sFilter)
        except:
            echo("# Error. List rules: wrong regex.")
            sFilter = None
    for sOption, lRuleGroup in chain(_getRules(True), _getRules(False)):
        if sOption != "@@@@":
            for _, _, sLineId, sRuleId, _, _ in lRuleGroup:
                if not sFilter or zFilter.search(sRuleId):
                    yield (sOption, sLineId, sRuleId)


def displayRules (sFilter=None):
    "display the name of rules, with the filter <sFilter>"
    echo("List of rules. Filter: << " + str(sFilter) + " >>")
    for sOption, sLineId, sRuleId in listRules(sFilter):
        echo("{:<10} {:<10} {}".format(sOption, sLineId, sRuleId))


#### Options

def setOption (sOpt, bVal):
    "set option <sOpt> with <bVal> if it exists"
    if sOpt in _dOptions:
        _dOptions[sOpt] = bVal


def setOptions (dOpt):
    "update the dictionary of options with <dOpt>"
    for sKey, bVal in dOpt.items():
        if sKey in _dOptions:
            _dOptions[sKey] = bVal


def getOptions ():
    "return the dictionary of current options"
    return _dOptions


def getDefaultOptions ():
    "return the dictionary of default options"
    return dict(gc_options.getOptions(_sAppContext))


def getOptionsLabels (sLang):
    "return options labels"
    return gc_options.getUI(sLang)


def displayOptions (sLang):
    "display the list of grammar checking options"
    echo("List of options")
    echo("\n".join( [ k+":\t"+str(v)+"\t"+gc_options.getUI(sLang).get(k, ("?", ""))[0]  for k, v  in sorted(_dOptions.items()) ] ))
    echo("")


def resetOptions ():
    "set options to default values"
    global _dOptions
    _dOptions = dict(gc_options.getOptions(_sAppContext))





#### Parsing

_zEndOfSentence = re.compile(r'([.?!:;…][ .?!… »”")]*|.$)')
_zBeginOfParagraph = re.compile(r"^\W*")
_zEndOfParagraph = re.compile(r"\W*$")

def _getSentenceBoundaries (sText):
    iStart = _zBeginOfParagraph.match(sText).end()
    for m in _zEndOfSentence.finditer(sText):
        yield (iStart, m.end())
        iStart = m.end()
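# Illustrative only: for the paragraph "Bonjour. Comment ça va ?", this generator yields
# (0, 9) then (9, 24), the start/end offsets of each sentence slice, i.e. "Bonjour. " and
# "Comment ça va ?".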


def parse (sText, sCountry="${country_default}", bDebug=False, dOptions=None, bContext=False):
    "init point to analyze a text"
    oText = TextParser(sText)
    return oText.parse(sCountry, bDebug, dOptions, bContext)
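# Hypothetical usage sketch (the import path and the sample text are assumptions, not part of this file):
#   from grammalecte.${lang} import gc_engine
#   gc_engine.load()                                    # must be called once before parsing
#   for dError in gc_engine.parse("Quant il sera là, nous mangeront."):
#       print(dError["sLineId"], dError["sRuleId"], dError["sMessage"], dError["aSuggestions"])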


#### TEXT PARSER

class TextParser:
    "Text parser"

    def __init__ (self, sText):
        self.sText = sText
        self.sText0 = sText
        self.sSentence = ""
        self.sSentence0 = ""
        self.nOffsetWithinParagraph = 0
        self.lToken = []
        self.dTokenPos = {}
        self.dTags = {}
        self.dError = {}
        self.dErrorPriority = {}  # Key = position; value = priority

    def __str__ (self):
        s = "===== TEXT =====\n"
        s += "sentence: " + self.sSentence0 + "\n"
        s += "now:      " + self.sSentence  + "\n"
        for dToken in self.lToken:
            s += '#{i}\t{nStart}:{nEnd}\t{sValue}\t{sType}'.format(**dToken)
            if "lMorph" in dToken:
                s += "\t" + str(dToken["lMorph"])
            if "aTags" in dToken:
                s += "\t" + str(dToken["aTags"])
            s += "\n"
        #for nPos, dToken in self.dTokenPos.items():
        #    s += "{}\t{}\n".format(nPos, dToken)
        return s


    def parse (self, sCountry="${country_default}", bDebug=False, dOptions=None, bContext=False):
        "analyses the paragraph sText and returns list of errors"
        #sText = unicodedata.normalize("NFC", sText)
        dOpt = dOptions or _dOptions
        bShowRuleId = option('idrule')

        # parse paragraph
        try:
            self.parseText(self.sText, self.sText0, True, 0, sCountry, dOpt, bShowRuleId, bDebug, bContext)
        except:
            raise

        # cleanup
        sText = self.sText
        if "\u00a0" in sText:
            sText = sText.replace("\u00a0", " ") # nbsp
        if "\u202f" in sText:
            sText = sText.replace("\u202f", " ") # nnbsp
        if "'" in sText:
            sText = sText.replace("'", "’")
        if "‑" in sText:
            sText = sText.replace("‑", "-") # nobreakdash

        # parse sentences
        for iStart, iEnd in _getSentenceBoundaries(sText):
            if 4 < (iEnd - iStart) < 2000:
                try:
                    self.sSentence = sText[iStart:iEnd]
                    self.sSentence0 = self.sText0[iStart:iEnd]
                    self.nOffsetWithinParagraph = iStart
                    self.lToken = list(_oTokenizer.genTokens(self.sSentence, True))
                    self.dTokenPos = { dToken["nStart"]: dToken  for dToken in self.lToken  if dToken["sType"] != "INFO" }
                    self.parseText(self.sSentence, self.sSentence0, False, iStart, sCountry, dOpt, bShowRuleId, bDebug, bContext)
                except:
                    raise
        return self.dError.values() # this is a view (iterable)

    def parseText (self, sText, sText0, bParagraph, nOffset, sCountry, dOptions, bShowRuleId, bDebug, bContext):
        "parse the text with regex and graph rules, create errors and apply text processor rewritings"
        bChange = False

        for sOption, lRuleGroup in _getRules(bParagraph):
            if sOption == "@@@@":
                # graph rules
                if not bParagraph and bChange:
                    self.update(sText, bDebug)
                    bChange = False
                for sGraphName, sLineId in lRuleGroup:
                    if sGraphName not in dOptions or dOptions[sGraphName]:
                        if bDebug:
                            echo("\n>>>> GRAPH: " + sGraphName + " " + sLineId)
                        sText = self.parseGraph(_rules_graph.dAllGraph[sGraphName], sCountry, dOptions, bShowRuleId, bDebug, bContext)
            elif not sOption or dOptions.get(sOption, False):
                # regex rules
                for zRegex, bUppercase, sLineId, sRuleId, nPriority, lActions in lRuleGroup:
                    if sRuleId not in _aIgnoredRules:
                        for m in zRegex.finditer(sText):
                            bCondMemo = None
                            for sFuncCond, cActionType, sWhat, *eAct in lActions:
                                # action in lActions: [ condition, action type, replacement/suggestion/action[, iGroup[, message, URL]] ]
                                try:
                                    bCondMemo = not sFuncCond or globals()[sFuncCond](sText, sText0, m, self.dTokenPos, sCountry, bCondMemo)
                                    if bCondMemo:
                                        if bDebug:
                                            echo("RULE: " + sLineId)
                                        if cActionType == "-":
                                            # grammar error
                                            nErrorStart = nOffset + m.start(eAct[0])
                                            if nErrorStart not in self.dError or nPriority > self.dErrorPriority.get(nErrorStart, -1):
                                                self.dError[nErrorStart] = self._createErrorFromRegex(sText, sText0, sWhat, nOffset, m, eAct[0], sLineId, sRuleId, bUppercase, eAct[1], eAct[2], bShowRuleId, sOption, bContext)
                                                self.dErrorPriority[nErrorStart] = nPriority
                                        elif cActionType == "~":
                                            # text processor
                                            sText = self.rewriteText(sText, sWhat, eAct[0], m, bUppercase)
                                            bChange = True
                                            if bDebug:
                                                echo("~ " + sText + "  -- " + m.group(eAct[0]) + "  # " + sLineId)
                                        elif cActionType == "=":
                                            # disambiguation
                                            if not bParagraph:
                                                globals()[sWhat](sText, m, self.dTokenPos)
                                                if bDebug:
                                                    echo("= " + m.group(0) + "  # " + sLineId)
                                        elif cActionType == ">":
                                            # we do nothing, this test is just a condition to apply all following actions
                                            pass
                                        else:
                                            echo("# error: unknown action at " + sLineId)
                                    elif cActionType == ">":
                                        break
                                except Exception as e:
                                    raise Exception(str(e), "# " + sLineId + " # " + sRuleId)
        if bChange:
            if bParagraph:
                self.sText = sText
            else:
                self.sSentence = sText

    def update (self, sSentence, bDebug=False):
        "update <sSentence> and retokenize"
        self.sSentence = sSentence
        lNewToken = list(_oTokenizer.genTokens(sSentence, True))
        for dToken in lNewToken:
            if "lMorph" in self.dTokenPos.get(dToken["nStart"], {}):
                dToken["lMorph"] = self.dTokenPos[dToken["nStart"]]["lMorph"]
            if "aTags" in self.dTokenPos.get(dToken["nStart"], {}):
                dToken["aTags"] = self.dTokenPos[dToken["nStart"]]["aTags"]
        self.lToken = lNewToken
        self.dTokenPos = { dToken["nStart"]: dToken  for dToken in self.lToken  if dToken["sType"] != "INFO" }
        if bDebug:
            echo("UPDATE:")
            echo(self)

    def _getNextPointers (self, dToken, dGraph, dPointer, bDebug=False):
        "generator: return nodes where <dToken> “values” match <dNode> arcs"
        dNode = dPointer["dNode"]
        iNode1 = dPointer["iNode1"]
        bTokenFound = False
        # token value
        if dToken["sValue"] in dNode:
            if bDebug:
                echo("  MATCH: " + dToken["sValue"])
            yield { "iNode1": iNode1, "dNode": dGraph[dNode[dToken["sValue"]]] }
            bTokenFound = True
        if dToken["sValue"][0:2].istitle(): # we test only the first 2 chars, so that words such as "Laissez-les" or "Passe-partout" remain valid
            sValue = dToken["sValue"].lower()
            if sValue in dNode:
                if bDebug:
                    echo("  MATCH: " + sValue)
                yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] }
                bTokenFound = True
        elif dToken["sValue"].isupper():
            sValue = dToken["sValue"].lower()
            if sValue in dNode:
                if bDebug:
                    echo("  MATCH: " + sValue)
                yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] }
                bTokenFound = True
            sValue = dToken["sValue"].capitalize()
            if sValue in dNode:
                if bDebug:
                    echo("  MATCH: " + sValue)
                yield { "iNode1": iNode1, "dNode": dGraph[dNode[sValue]] }
                bTokenFound = True
        # regex value arcs
        if dToken["sType"] not in frozenset(["INFO", "PUNC", "SIGN"]):
            if "<re_value>" in dNode:
                for sRegex in dNode["<re_value>"]:
                    if "¬" not in sRegex:
                        # no anti-pattern
                        if re.search(sRegex, dToken["sValue"]):
                            if bDebug:
                                echo("  MATCH: ~" + sRegex)
                            yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_value>"][sRegex]] }
                            bTokenFound = True
                    else:
                        # there is an anti-pattern
                        sPattern, sNegPattern = sRegex.split("¬", 1)
                        if sNegPattern and re.search(sNegPattern, dToken["sValue"]):
                            continue
                        if not sPattern or re.search(sPattern, dToken["sValue"]):
                            if bDebug:
                                echo("  MATCH: ~" + sRegex)
                            yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_value>"][sRegex]] }
                            bTokenFound = True
        # analysable tokens
        if dToken["sType"][0:4] == "WORD":
            # token lemmas
            if "<lemmas>" in dNode:
                for sLemma in _oSpellChecker.getLemma(dToken["sValue"]):
                    if sLemma in dNode["<lemmas>"]:
                        if bDebug:
                            echo("  MATCH: >" + sLemma)
                        yield { "iNode1": iNode1, "dNode": dGraph[dNode["<lemmas>"][sLemma]] }
                        bTokenFound = True
            # regex morph arcs
            if "<re_morph>" in dNode:
                lMorph = dToken.get("lMorph", _oSpellChecker.getMorph(dToken["sValue"]))
                for sRegex in dNode["<re_morph>"]:
                    if "¬" not in sRegex:
                        # no anti-pattern
                        if any(re.search(sRegex, sMorph)  for sMorph in lMorph):
                            if bDebug:
                                echo("  MATCH: @" + sRegex)
                            yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] }
                            bTokenFound = True
                    else:
                        # there is an anti-pattern
                        sPattern, sNegPattern = sRegex.split("¬", 1)
                        if sNegPattern == "*":
                            # all morphologies must match with <sPattern>
                            if sPattern:
                                if lMorph and all(re.search(sPattern, sMorph)  for sMorph in lMorph):
                                    if bDebug:
                                        echo("  MATCH: @" + sRegex)
                                    yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] }
                                    bTokenFound = True
                        else:
                            if sNegPattern and any(re.search(sNegPattern, sMorph)  for sMorph in lMorph):
                                continue
                            if not sPattern or any(re.search(sPattern, sMorph)  for sMorph in lMorph):
                                if bDebug:
                                    echo("  MATCH: @" + sRegex)
                                yield { "iNode1": iNode1, "dNode": dGraph[dNode["<re_morph>"][sRegex]] }
                                bTokenFound = True
        # token tags
        if "aTags" in dToken and "<tags>" in dNode:
            for sTag in dToken["aTags"]:
                if sTag in dNode["<tags>"]:
                    if bDebug:
                        echo("  MATCH: /" + sTag)
                    yield { "iNode1": iNode1, "dNode": dGraph[dNode["<tags>"][sTag]] }
                    bTokenFound = True
        # meta arc (for token type)
        if "<meta>" in dNode:
            for sMeta in dNode["<meta>"]:
                # no regex here, we just check whether <dToken["sType"]> matches <sMeta>
                if sMeta == "*" or dToken["sType"] == sMeta:
                    if bDebug:
                        echo("  MATCH: *" + sMeta)
                    yield { "iNode1": iNode1, "dNode": dGraph[dNode["<meta>"][sMeta]] }
                    bTokenFound = True
                elif "¬" in sMeta:
                    if dToken["sType"] not in sMeta:
                        if bDebug:
                            echo("  MATCH: *" + sMeta)
                        yield { "iNode1": iNode1, "dNode": dGraph[dNode["<meta>"][sMeta]] }
                        bTokenFound = True
        if not bTokenFound and "bKeep" in dPointer:
            yield dPointer
        # JUMP
        # Warning! Recursion!
        if "<>" in dNode:
            dPointer2 = { "iNode1": iNode1, "dNode": dGraph[dNode["<>"]], "bKeep": True }
            yield from self._getNextPointers(dToken, dGraph, dPointer2, bDebug)
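    # Illustrative walk over a hypothetical graph (not a real generated one): with
    # dGraph[0] == {"ne": 1} and dGraph[1] == {"<re_morph>": {":V": 2}}, the token "ne" yields a
    # pointer on dGraph[1]; if the next token has a morphology matching ":V", the pointer advances
    # to dGraph[2], whose "<rules>" entry (if any) is then handled by parseGraph() through
    # _executeActions().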

    def parseGraph (self, dGraph, sCountry="${country_default}", dOptions=None, bShowRuleId=False, bDebug=False, bContext=False):
        "parse graph with tokens from the text and execute actions encountered"
        lPointer = []
        bTagAndRewrite = False
        for iToken, dToken in enumerate(self.lToken):
            if bDebug:
                echo("TOKEN: " + dToken["sValue"])
            # check arcs for each existing pointer
            lNextPointer = []
            for dPointer in lPointer:
                lNextPointer.extend(self._getNextPointers(dToken, dGraph, dPointer, bDebug))
            lPointer = lNextPointer
            # check arcs of first nodes
            lPointer.extend(self._getNextPointers(dToken, dGraph, { "iNode1": iToken, "dNode": dGraph[0] }, bDebug))
            # check if there are rules to check for each pointer
            for dPointer in lPointer:
                #if bDebug:
                #    echo("+", dPointer)
                if "<rules>" in dPointer["dNode"]:
                    bChange = self._executeActions(dGraph, dPointer["dNode"]["<rules>"], dPointer["iNode1"]-1, iToken, dOptions, sCountry, bShowRuleId, bDebug, bContext)
                    if bChange:
                        bTagAndRewrite = True
        if bTagAndRewrite:
            self.rewriteFromTags(bDebug)
        if bDebug:
            echo(self)
        return self.sSentence




    def _executeActions (self, dGraph, dNode, nTokenOffset, nLastToken, dOptions, sCountry, bShowRuleId, bDebug, bContext):
        "execute actions found in the DARG"
        bChange = False

        for sLineId, nextNodeKey in dNode.items():
            bCondMemo = None
            for sRuleId in dGraph[nextNodeKey]:
                try:
                    if bDebug:
                        echo("   >TRY: " + sRuleId + " " + sLineId)
                    sOption, sFuncCond, cActionType, sWhat, *eAct = _rules_graph.dRule[sRuleId]
                    # Suggestion    [ option, condition, "-", replacement/suggestion/action, iTokenStart, iTokenEnd, cStartLimit, cEndLimit, bCaseSvty, nPriority, sMessage, sURL ]
                    # TextProcessor [ option, condition, "~", replacement/suggestion/action, iTokenStart, iTokenEnd, bCaseSvty ]
                    # Disambiguator [ option, condition, "=", replacement/suggestion/action ]
                    # Tag           [ option, condition, "/", replacement/suggestion/action, iTokenStart, iTokenEnd ]
                    # Immunity      [ option, condition, "%", "",                            iTokenStart, iTokenEnd ]
                    # Test          [ option, condition, ">", "" ]
                    if not sOption or dOptions.get(sOption, False):
                        bCondMemo = not sFuncCond or globals()[sFuncCond](self.lToken, nTokenOffset, nLastToken, sCountry, bCondMemo, self.dTags, self.sSentence, self.sSentence0)
                        if bCondMemo:
                            if cActionType == "-":
                                # grammar error
                                iTokenStart, iTokenEnd, cStartLimit, cEndLimit, bCaseSvty, nPriority, sMessage, sURL = eAct
                                nTokenErrorStart = nTokenOffset + iTokenStart  if iTokenStart > 0  else nLastToken + iTokenStart
                                if "bImmune" not in self.lToken[nTokenErrorStart]:
                                    nTokenErrorEnd = nTokenOffset + iTokenEnd  if iTokenEnd > 0  else nLastToken + iTokenEnd
                                    nErrorStart = self.nOffsetWithinParagraph + (self.lToken[nTokenErrorStart]["nStart"] if cStartLimit == "<"  else self.lToken[nTokenErrorStart]["nEnd"])
                                    nErrorEnd = self.nOffsetWithinParagraph + (self.lToken[nTokenErrorEnd]["nEnd"] if cEndLimit == ">"  else self.lToken[nTokenErrorEnd]["nStart"])
                                    if nErrorStart not in self.dError or nPriority > self.dErrorPriority.get(nErrorStart, -1):
                                        self.dError[nErrorStart] = self._createErrorFromTokens(sWhat, nTokenOffset, nLastToken, nTokenErrorStart, nErrorStart, nErrorEnd, sLineId, sRuleId, bCaseSvty, sMessage, sURL, bShowRuleId, sOption, bContext)
                                        self.dErrorPriority[nErrorStart] = nPriority
                                        if bDebug:
                                            echo("    NEW_ERROR: {}".format(self.dError[nErrorStart]))
                            elif cActionType == "~":
                                # text processor
                                nTokenStart = nTokenOffset + eAct[0]  if eAct[0] > 0  else nLastToken + eAct[0]
                                nTokenEnd = nTokenOffset + eAct[1]  if eAct[1] > 0  else nLastToken + eAct[1]
                                self._tagAndPrepareTokenForRewriting(sWhat, nTokenStart, nTokenEnd, nTokenOffset, nLastToken, eAct[2], bDebug)
                                bChange = True
                                if bDebug:
                                    echo("    TEXT_PROCESSOR: [{}:{}]  > {}".format(self.lToken[nTokenStart]["sValue"], self.lToken[nTokenEnd]["sValue"], sWhat))
                            elif cActionType == "=":
                                # disambiguation
                                globals()[sWhat](self.lToken, nTokenOffset, nLastToken)
                                if bDebug:
                                    echo("    DISAMBIGUATOR: ({})  [{}:{}]".format(sWhat, self.lToken[nTokenOffset+1]["sValue"], self.lToken[nLastToken]["sValue"]))
                            elif cActionType == ">":
                                # we do nothing, this test is just a condition to apply all following actions
                                if bDebug:
                                    echo("    COND_OK")
                                pass
                            elif cActionType == "/":
                                # Tag
                                nTokenStart = nTokenOffset + eAct[0]  if eAct[0] > 0  else nLastToken + eAct[0]
                                nTokenEnd = nTokenOffset + eAct[1]  if eAct[1] > 0  else nLastToken + eAct[1]
                                for i in range(nTokenStart, nTokenEnd+1):
                                    if "aTags" in self.lToken[i]:
                                        self.lToken[i]["aTags"].update(sWhat.split("|"))
                                    else:
                                        self.lToken[i]["aTags"] = set(sWhat.split("|"))
                                if bDebug:
                                    echo("    TAG: {} >  [{}:{}]".format(sWhat, self.lToken[nTokenStart]["sValue"], self.lToken[nTokenEnd]["sValue"]))
                                if sWhat not in self.dTags:
                                    self.dTags[sWhat] = [nTokenStart, nTokenStart]
                                else:
                                    self.dTags[sWhat][0] = min(nTokenStart, self.dTags[sWhat][0])
                                    self.dTags[sWhat][1] = max(nTokenEnd, self.dTags[sWhat][1])
                            elif cActionType == "%":
                                # immunity
                                if bDebug:
                                    echo("    IMMUNITY: " + str(_rules_graph.dRule[sRuleId]))
                                nTokenStart = nTokenOffset + eAct[0]  if eAct[0] > 0  else nLastToken + eAct[0]
                                nTokenEnd = nTokenOffset + eAct[1]  if eAct[1] > 0  else nLastToken + eAct[1]
                                if nTokenEnd - nTokenStart == 0:
                                    self.lToken[nTokenStart]["bImmune"] = True
                                    nErrorStart = self.nOffsetWithinParagraph + self.lToken[nTokenStart]["nStart"]
                                    if nErrorStart in self.dError:
                                        del self.dError[nErrorStart]
                                else:
                                    for i in range(nTokenStart, nTokenEnd+1):
                                        self.lToken[i]["bImmune"] = True
                                        nErrorStart = self.nOffsetWithinParagraph + self.lToken[i]["nStart"]
                                        if nErrorStart in self.dError:
                                            del self.dError[nErrorStart]
                            else:
                                echo("# error: unknown action at " + sLineId)
                        elif cActionType == ">":
                            if bDebug:
                                echo("    COND_BREAK")
                            break
                except Exception as e:
                    raise Exception(str(e), sLineId, sRuleId, self.sSentence)
        return bChange


    def _createErrorFromRegex (self, sText, sText0, sRepl, nOffset, m, iGroup, sLineId, sRuleId, bUppercase, sMsg, sURL, bShowRuleId, sOption, bContext):
        nStart = nOffset + m.start(iGroup)
        nEnd = nOffset + m.end(iGroup)
        # suggestions
        if sRepl[0:1] == "=":
            sSugg = globals()[sRepl[1:]](sText, m)
            lSugg = sSugg.split("|")  if sSugg  else []
        elif sRepl == "_":
            lSugg = []
        else:
            lSugg = m.expand(sRepl).split("|")
        if bUppercase and lSugg and m.group(iGroup)[0:1].isupper():
            lSugg = list(map(str.capitalize, lSugg))
        # Message
        sMessage = globals()[sMsg[1:]](sText, m)  if sMsg[0:1] == "="  else  m.expand(sMsg)
        if bShowRuleId:
            sMessage += "  # " + sLineId + " # " + sRuleId
        #
        if _bWriterError:
            return self._createErrorForWriter(nStart, nEnd - nStart, sRuleId, sMessage, lSugg, sURL)
        else:
            return self._createErrorAsDict(nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext)




    def _createErrorFromTokens (self, sSugg, nTokenOffset, nLastToken, iFirstToken, nStart, nEnd, sLineId, sRuleId, bCaseSvty, sMsg, sURL, bShowRuleId, sOption, bContext):
        # suggestions
        if sSugg[0:1] == "=":
            sSugg = globals()[sSugg[1:]](self.lToken, nTokenOffset, nLastToken)
            lSugg = sSugg.split("|")  if sSugg  else []
        elif sSugg == "_":
            lSugg = []
        else:
            lSugg = self._expand(sSugg, nTokenOffset, nLastToken).split("|")
        if bCaseSvty and lSugg and self.lToken[iFirstToken]["sValue"][0:1].isupper():
            lSugg = list(map(lambda s: s[0:1].upper()+s[1:], lSugg))
        # Message
        sMessage = globals()[sMsg[1:]](self.lToken, nTokenOffset, nLastToken)  if sMsg[0:1] == "="  else self._expand(sMsg, nTokenOffset, nLastToken)
        if bShowRuleId:
            sMessage += "  " + sLineId + " # " + sRuleId
        #
        if _bWriterError:
            return self._createErrorForWriter(nStart, nEnd - nStart, sRuleId, sMessage, lSugg, sURL)
        else:
            return self._createErrorAsDict(nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext)

    def _createErrorForWriter (self, nStart, nLen, sRuleId, sMessage, lSugg, sURL):
        xErr = SingleProofreadingError()    # uno.createUnoStruct( "com.sun.star.linguistic2.SingleProofreadingError" )
        xErr.nErrorStart = nStart
        xErr.nErrorLength = nLen
        xErr.nErrorType = PROOFREADING
        xErr.aRuleIdentifier = sRuleId
        xErr.aShortComment = sMessage   # sMessage.split("|")[0]     # in context menu
        xErr.aFullComment = sMessage    # sMessage.split("|")[-1]    # in dialog
        xErr.aSuggestions = tuple(lSugg)
        #xPropertyLineType = PropertyValue(Name="LineType", Value=5) # DASH or WAVE
        #xPropertyLineColor = PropertyValue(Name="LineColor", Value=getRGB("FFAA00"))
        if sURL:
            xPropertyURL = PropertyValue(Name="FullCommentURL", Value=sURL)
            xErr.aProperties = (xPropertyURL,)
        else:
            xErr.aProperties = ()
        return xErr




    def _createErrorAsDict (self, nStart, nEnd, sLineId, sRuleId, sOption, sMessage, lSugg, sURL, bContext):
        dErr = {
            "nStart": nStart,
            "nEnd": nEnd,
            "sLineId": sLineId,
            "sRuleId": sRuleId,
            "sType": sOption  if sOption  else "notype",
            "sMessage": sMessage,
            "aSuggestions": lSugg,
            "URL": sURL
        }
        if bContext:
            dErr['sUnderlined'] = self.sText0[nStart:nEnd]
            dErr['sBefore'] = self.sText0[max(0,nStart-80):nStart]
            dErr['sAfter'] = self.sText0[nEnd:nEnd+80]
        return dErr


    def _expand (self, sText, nTokenOffset, nLastToken):
        for m in re.finditer(r"\\(-?[0-9]+)", sText):
            if m.group(1)[0:1] == "-":
                sText = sText.replace(m.group(0), self.lToken[nLastToken+int(m.group(1))+1]["sValue"])
            else:
                sText = sText.replace(m.group(0), self.lToken[nTokenOffset+int(m.group(1))]["sValue"])
        return sText
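    # Illustrative only: in a message or suggestion such as "\2 et \-1", \2 expands to the value of
    # the second token of the matched pattern (lToken[nTokenOffset+2]) and \-1 to the value of the
    # last matched token (lToken[nLastToken]).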

    def rewriteText (self, sText, sRepl, iGroup, m, bUppercase):
        "text processor: write <sRepl> in <sText> at <iGroup> position"
        nLen = m.end(iGroup) - m.start(iGroup)
        if sRepl == "*":
            sNew = " " * nLen
        elif sRepl == "_":
            sNew = sRepl + " " * (nLen-1)
        elif sRepl[0:1] == "=":
            sNew = globals()[sRepl[1:]](sText, m)
            sNew = sNew + " " * (nLen-len(sNew))
            if bUppercase and m.group(iGroup)[0:1].isupper():
                sNew = sNew.capitalize()
        else:
            sNew = m.expand(sRepl)
            sNew = sNew + " " * (nLen-len(sNew))
        return sText[0:m.start(iGroup)] + sNew + sText[m.end(iGroup):]
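    # Illustrative only: with sRepl == "*", the matched group is blanked out with spaces of the same
    # length, so the offsets of the remaining text are preserved for the following rules; e.g. a
    # matched group "par exemple" is replaced by eleven spaces, never by a shorter or longer string.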

    def _tagAndPrepareTokenForRewriting (self, sWhat, nTokenRewriteStart, nTokenRewriteEnd, nTokenOffset, nLastToken, bCaseSvty, bDebug):
        "text processor: rewrite tokens between the <nTokenRewriteStart> and <nTokenRewriteEnd> positions"
        if sWhat == "*":
            # purge text
            if nTokenRewriteEnd - nTokenRewriteStart == 0:
                self.lToken[nTokenRewriteStart]["bToRemove"] = True
            else:
                for i in range(nTokenRewriteStart, nTokenRewriteEnd+1):
                    self.lToken[i]["bToRemove"] = True
        elif sWhat == "␣":
            # merge tokens
            self.lToken[nTokenRewriteStart]["nMergeUntil"] = nTokenRewriteEnd
        elif sWhat == "_":
            # neutralized token
            if nTokenRewriteEnd - nTokenRewriteStart == 0:
                self.lToken[nTokenRewriteStart]["sNewValue"] = "_"
            else:
                for i in range(nTokenRewriteStart, nTokenRewriteEnd+1):
                    self.lToken[i]["sNewValue"] = "_"
        else:
            if sWhat.startswith("="):
                sWhat = globals()[sWhat[1:]](self.lToken, nTokenOffset, nLastToken)
            else:
                sWhat = self._expand(sWhat, nTokenOffset, nLastToken)
            bUppercase = bCaseSvty and self.lToken[nTokenRewriteStart]["sValue"][0:1].isupper()
            if nTokenRewriteEnd - nTokenRewriteStart == 0:
                # one token
                if bUppercase:
                    sWhat = sWhat[0:1].upper() + sWhat[1:]
                self.lToken[nTokenRewriteStart]["sNewValue"] = sWhat
            else:
                # several tokens
                lTokenValue = sWhat.split("|")
                if len(lTokenValue) != (nTokenRewriteEnd - nTokenRewriteStart + 1):
                    echo("Error. Text processor: number of replacements != number of tokens.")
                    return

                for i, sValue in zip(range(nTokenRewriteStart, nTokenRewriteEnd+1), lTokenValue):
                    if not sValue or sValue == "*":
                        self.lToken[i]["bToRemove"] = True
                    else:
                        if bUppercase:
                            sValue = sValue[0:1].upper() + sValue[1:]
                        self.lToken[i]["sNewValue"] = sValue

    def rewriteFromTags (self, bDebug=False):
        "rewrite the sentence, modify tokens, purge the token list"
        if bDebug:
            echo("REWRITE")
        lNewToken = []
        nMergeUntil = 0
        dTokenMerger = None
        for iToken, dToken in enumerate(self.lToken):
            bKeepToken = True
            if dToken["sType"] != "INFO":
                if nMergeUntil and iToken <= nMergeUntil:
                    dTokenMerger["sValue"] += " " * (dToken["nStart"] - dTokenMerger["nEnd"]) + dToken["sValue"]
                    dTokenMerger["nEnd"] = dToken["nEnd"]
                    if bDebug:
                        echo("  MERGED TOKEN: " + dTokenMerger["sValue"])
                    bKeepToken = False
                if "nMergeUntil" in dToken:
                    if iToken > nMergeUntil: # this token is not already merged with a previous token
                        dTokenMerger = dToken
                    if dToken["nMergeUntil"] > nMergeUntil:
                        nMergeUntil = dToken["nMergeUntil"]
                    del dToken["nMergeUntil"]
                elif "bToRemove" in dToken:
                    if bDebug:
                        echo("  REMOVED: " + dToken["sValue"])
                    self.sSentence = self.sSentence[:dToken["nStart"]] + " " * (dToken["nEnd"] - dToken["nStart"]) + self.sSentence[dToken["nEnd"]:]
                    bKeepToken = False
            #
            if bKeepToken:
                lNewToken.append(dToken)
                if "sNewValue" in dToken:
                    # rewrite token and sentence
                    if bDebug:
                        echo(dToken["sValue"] + " -> " + dToken["sNewValue"])
                    dToken["sRealValue"] = dToken["sValue"]
                    dToken["sValue"] = dToken["sNewValue"]
                    nDiffLen = len(dToken["sRealValue"]) - len(dToken["sNewValue"])
                    sNewRepl = (dToken["sNewValue"] + " " * nDiffLen)  if nDiffLen >= 0  else dToken["sNewValue"][:len(dToken["sRealValue"])]
                    self.sSentence = self.sSentence[:dToken["nStart"]] + sNewRepl + self.sSentence[dToken["nEnd"]:]
                    del dToken["sNewValue"]
            else:
                try:
                    del self.dTokenPos[dToken["nStart"]]
                except:
                    echo(self)
                    echo(dToken)
                    exit()
        if bDebug:
            echo("  TEXT REWRITTEN: " + self.sSentence)
        self.lToken.clear()
        self.lToken = lNewToken


#### common functions

def option (sOpt):
    "return True if option <sOpt> is active"
    return _dOptions.get(sOpt, False)


#### Functions to get text outside pattern scope

# warning: check compile_rules.py to understand how it works

_zNextWord = re.compile(r" +(\w[\w-]*)")
_zPrevWord = re.compile(r"(\w[\w-]*) +$")

def nextword (s, iStart, n):
    "get the nth word of the input string after position <iStart> as a (position, word) tuple, or None"
    m = re.match("(?: +[\\w%-]+){" + str(n-1) + "} +([\\w%-]+)", s[iStart:])
    if not m:
        return None
    return (iStart+m.start(1), m.group(1))
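# Illustrative only: nextword("une grande maison", 3, 2) returns (11, "maison"), the offset and
# value of the 2nd word found after position 3; it returns None when there are not enough words.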

................................................................................
    if sNegPattern and re.search(sNegPattern, s):
        return False
    if re.search(sPattern, s):
        return True
    return False


def look_chk1 (dTokenPos, s, nOffset, sPattern, sPatternGroup1, sNegPatternGroup1=""):
    "returns True if s has pattern sPattern and m.group(1) has pattern sPatternGroup1"
    m = re.search(sPattern, s)
    if not m:
        return False
    try:
        sWord = m.group(1)
        nPos = m.start(1) + nOffset
    except:
        return False
    return morph(dTokenPos, (nPos, sWord), sPatternGroup1, sNegPatternGroup1)



#### Analyse groups for regex rules

def displayInfo (dTokenPos, tWord):
    "for debugging: retrieve info of word"
    if not tWord:
        echo("> nothing to find")
        return True
    lMorph = _oSpellChecker.getMorph(tWord[1])
    if not lMorph:
        echo("> not in dictionary")
        return True
    echo("TOKENS:", dTokenPos)
    if tWord[0] in dTokenPos and "lMorph" in dTokenPos[tWord[0]]:
        echo("DA: " + str(dTokenPos[tWord[0]]["lMorph"]))
    echo("FSA: " + str(lMorph))
    return True


def morph (dTokenPos, tWord, sPattern, sNegPattern="", bNoWord=False):
    "analyse a tuple (position, word), returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation on)"
    if not tWord:
        return bNoWord
    lMorph = dTokenPos[tWord[0]]["lMorph"]  if tWord[0] in dTokenPos and "lMorph" in dTokenPos[tWord[0]]  else _oSpellChecker.getMorph(tWord[1])
    if not lMorph:
        return False
    # check negative condition
    if sNegPattern:
        if sNegPattern == "*":
            # all morph must match sPattern
            zPattern = re.compile(sPattern)
            return all(zPattern.search(sMorph)  for sMorph in lMorph)
        else:
            zNegPattern = re.compile(sNegPattern)
            if any(zNegPattern.search(sMorph)  for sMorph in lMorph):
                return False
    # search sPattern
    zPattern = re.compile(sPattern)
    return any(zPattern.search(sMorph)  for sMorph in lMorph)


def analyse (sWord, sPattern, sNegPattern=""):
    "analyse a word, returns True if not sNegPattern in word morphologies and sPattern in word morphologies (disambiguation off)"
    lMorph = _oSpellChecker.getMorph(sWord)
    if not lMorph:
        return False
    # check negative condition
    if sNegPattern:
        if sNegPattern == "*":
            zPattern = re.compile(sPattern)
            return all(zPattern.search(sMorph)  for sMorph in lMorph)
        else:
            zNegPattern = re.compile(sNegPattern)
            if any(zNegPattern.search(sMorph)  for sMorph in lMorph):
                return False
    # search sPattern
    zPattern = re.compile(sPattern)
    return any(zPattern.search(sMorph)  for sMorph in lMorph)
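# Illustrative only: analyse(sWord, ":V", ":N") returns True only if at least one morphology of
# sWord matches ":V" and none matches ":N"; with sNegPattern == "*", every morphology must match
# sPattern instead.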


#### Analyse tokens for graph rules

def g_value (dToken, sValues, nLeft=None, nRight=None):
    "test if <dToken['sValue']> is in sValues (each value should be separated with |)"
    sValue = "|"+dToken["sValue"]+"|"  if nLeft is None  else "|"+dToken["sValue"][slice(nLeft, nRight)]+"|"
    if sValue in sValues:
        return True
    if dToken["sValue"][0:2].istitle(): # we test only the first 2 chars, so that words such as "Laissez-les" or "Passe-partout" remain valid
        if sValue.lower() in sValues:
            return True
    elif dToken["sValue"].isupper():
        #if sValue.lower() in sValues:
        #    return True
        sValue = "|"+sValue[1:].capitalize()
        if sValue in sValues:
            return True
    return False
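# Illustrative only: g_value({"sValue": "Est"}, "|est|sont|") returns True; the exact value is not
# in the list, but the capitalized token falls back to its lowercase form "|est|".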


def g_morph (dToken, sPattern, sNegPattern="", nLeft=None, nRight=None, bMemorizeMorph=True):
    "analyse a token, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies"
    if "lMorph" in dToken:
        lMorph = dToken["lMorph"]
    else:
        if nLeft is not None:
            lMorph = _oSpellChecker.getMorph(dToken["sValue"][slice(nLeft, nRight)])
            if bMemorizeMorph:
                dToken["lMorph"] = lMorph
        else:
            lMorph = _oSpellChecker.getMorph(dToken["sValue"])
    if not lMorph:
        return False
    # check negative condition
    if sNegPattern:
        if sNegPattern == "*":
            # all morph must match sPattern
            zPattern = re.compile(sPattern)
            return all(zPattern.search(sMorph)  for sMorph in lMorph)
        else:
            zNegPattern = re.compile(sNegPattern)
            if any(zNegPattern.search(sMorph)  for sMorph in lMorph):
                return False
    # search sPattern
    zPattern = re.compile(sPattern)
    return any(zPattern.search(sMorph)  for sMorph in lMorph)


def g_analyse (dToken, sPattern, sNegPattern="", nLeft=None, nRight=None, bMemorizeMorph=True):
    "analyse a token, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies (disambiguation off)"
    if nLeft is not None:
        lMorph = _oSpellChecker.getMorph(dToken["sValue"][slice(nLeft, nRight)])
        if bMemorizeMorph:
            dToken["lMorph"] = lMorph
    else:
        lMorph = _oSpellChecker.getMorph(dToken["sValue"])
    if not lMorph:
        return False
    # check negative condition
    if sNegPattern:
        if sNegPattern == "*":
            # all morph must match sPattern
            zPattern = re.compile(sPattern)
            return all(zPattern.search(sMorph)  for sMorph in lMorph)
        else:
            zNegPattern = re.compile(sNegPattern)
            if any(zNegPattern.search(sMorph)  for sMorph in lMorph):
                return False
    # search sPattern
    zPattern = re.compile(sPattern)
    return any(zPattern.search(sMorph)  for sMorph in lMorph)


def g_merged_analyse (dToken1, dToken2, cMerger, sPattern, sNegPattern="", bSetMorph=True):
    "merge two token values, return True if <sNegPattern> not in morphologies and <sPattern> in morphologies (disambiguation off)"
    lMorph = _oSpellChecker.getMorph(dToken1["sValue"] + cMerger + dToken2["sValue"])
    if not lMorph:
        return False
    # check negative condition
    if sNegPattern:
        if sNegPattern == "*":
            # all morph must match sPattern
            zPattern = re.compile(sPattern)
            bResult = all(zPattern.search(sMorph)  for sMorph in lMorph)
            if bResult and bSetMorph:
                dToken1["lMorph"] = lMorph
            return bResult
        else:
            zNegPattern = re.compile(sNegPattern)
            if any(zNegPattern.search(sMorph)  for sMorph in lMorph):
                return False
    # search sPattern
    zPattern = re.compile(sPattern)
    bResult = any(zPattern.search(sMorph)  for sMorph in lMorph)
    if bResult and bSetMorph:
        dToken1["lMorph"] = lMorph
    return bResult


def g_tag_before (dToken, dTags, sTag):
    "returns True if <sTag> is set on a token before <dToken>"
    if sTag not in dTags:
        return False
    if dToken["i"] > dTags[sTag][0]:
        return True
    return False


def g_tag_after (dToken, dTags, sTag):
    "returns True if <sTag> is set on a token after <dToken>"
    if sTag not in dTags:
        return False
    if dToken["i"] < dTags[sTag][1]:
        return True
    return False


def g_tag (dToken, sTag):
    "returns True if <dToken> carries the tag <sTag>"
    return "aTags" in dToken and sTag in dToken["aTags"]


def g_space_between_tokens (dToken1, dToken2, nMin, nMax=None):
    "returns True if the gap between <dToken1> and <dToken2> is at least <nMin> and at most <nMax> characters"
    nSpace = dToken2["nStart"] - dToken1["nEnd"]
    if nSpace < nMin:
        return False
    if nMax is not None and nSpace > nMax:
        return False
    return True


def g_token (lToken, i):
    "returns the token at index <i> in <lToken>, clamped to the list boundaries"
    if i < 0:
        return lToken[0]
    if i >= len(lToken):
        return lToken[-1]
    return lToken[i]



#### Disambiguator for regex rules

def select (dTokenPos, nPos, sWord, sPattern, lDefault=None):
    "Disambiguation: select morphologies of <sWord> matching <sPattern>"
    if not sWord:
        return True
    if nPos not in dTokenPos:
        echo("Error. There should be a token at this position: ", nPos)
        return True
    lMorph = _oSpellChecker.getMorph(sWord)
    if not lMorph or len(lMorph) == 1:
        return True
    lSelect = [ sMorph  for sMorph in lMorph  if re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(lMorph):
            dTokenPos[nPos]["lMorph"] = lSelect
    elif lDefault:
        dTokenPos[nPos]["lMorph"] = lDefault
    return True


def exclude (dTokenPos, nPos, sWord, sPattern, lDefault=None):
    "Disambiguation: exclude morphologies of <sWord> matching <sPattern>"
    if not sWord:
        return True
    if nPos not in dTokenPos:
        echo("Error. There should be a token at this position: ", nPos)
        return True

    lMorph = _oSpellChecker.getMorph(sWord)
    if not lMorph or len(lMorph) == 1:
        return True

    lSelect = [ sMorph  for sMorph in lMorph  if not re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(lMorph):
            dTokenPos[nPos]["lMorph"] = lSelect
    elif lDefault:
        dTokenPos[nPos]["lMorph"] = lDefault
    return True


def define (dTokenPos, nPos, lMorph):
    "Disambiguation: set morphologies of token at <nPos> with <lMorph>"
    if nPos not in dTokenPos:
        echo("Error. There should be a token at this position: ", nPos)
        return True
    dTokenPos[nPos]["lMorph"] = lMorph
    return True


#### Disambiguation for graph rules

def g_select (dToken, sPattern, lDefault=None):
    "select morphologies for <dToken> according to <sPattern>, always return True"
    lMorph = dToken["lMorph"]  if "lMorph" in dToken  else _oSpellChecker.getMorph(dToken["sValue"])
    if not lMorph or len(lMorph) == 1:
        if lDefault:
            dToken["lMorph"] = lDefault
            #echo("DA:", dToken["sValue"], dToken["lMorph"])
        return True
    lSelect = [ sMorph  for sMorph in lMorph  if re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(lMorph):
            dToken["lMorph"] = lSelect
    elif lDefault:
        dToken["lMorph"] = lDefault
    #echo("DA:", dToken["sValue"], dToken["lMorph"])
    return True


def g_exclude (dToken, sPattern, lDefault=None):
    "exclude morphologies of <dToken> matching <sPattern>, always return True"
    lMorph = dToken["lMorph"]  if "lMorph" in dToken  else _oSpellChecker.getMorph(dToken["sValue"])
    if not lMorph or len(lMorph) == 1:
        if lDefault:
            dToken["lMorph"] = lDefault
            #echo("DA:", dToken["sValue"], dToken["lMorph"])
        return True
    lSelect = [ sMorph  for sMorph in lMorph  if not re.search(sPattern, sMorph) ]
    if lSelect:
        if len(lSelect) != len(lMorph):
            dToken["lMorph"] = lSelect
    elif lDefault:
        dToken["lMorph"] = lDefault
    #echo("DA:", dToken["sValue"], dToken["lMorph"])
    return True



def g_define (dToken, lMorph):
    "set morphologies of <dToken>, always return True"
    dToken["lMorph"] = lMorph

    #echo("DA:", dToken["sValue"], lMorph)
    return True


def g_define_from (dToken, nLeft=None, nRight=None):
    "set morphologies of <dToken> from a slice of its value, always return True"
    if nLeft is not None:
        dToken["lMorph"] = _oSpellChecker.getMorph(dToken["sValue"][slice(nLeft, nRight)])
    else:
        dToken["lMorph"] = _oSpellChecker.getMorph(dToken["sValue"])
    return True



#### GRAMMAR CHECKER PLUGINS

${plugins}


#### CALLABLES FOR REGEX RULES (generated code)

${callables}


#### CALLABLES FOR GRAPH RULES (generated code)

${graph_callables}

Modified gc_core/py/lang_core/gc_options.py from [871c8d4b8f] to [c84731594a].

# generated code, do not edit

def getUI (sLang):

    if sLang in _dOptLabel:
        return _dOptLabel[sLang]
    return _dOptLabel["fr"]


def getOptions (sContext="Python"):

    if sContext in dOpt:
        return dOpt[sContext]
    return dOpt["Python"]


lStructOpt = ${lStructOpt}

"""
Grammar checker default options
"""

# generated code, do not edit

def getUI (sLang):
    "returns dictionary of UI labels"
    if sLang in _dOptLabel:
        return _dOptLabel[sLang]
    return _dOptLabel["fr"]


def getOptions (sContext="Python"):
    "returns dictionary of options"
    if sContext in dOpt:
        return dOpt[sContext]
    return dOpt["Python"]


lStructOpt = ${lStructOpt}
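# Illustrative shapes only (the real values are generated at build time and presumably defined
# further down in the full module): dOpt maps a context name ("Python", "Writer", ...) to a
# dictionary of boolean options, and _dOptLabel maps a language code to a dictionary of
# (label, description) tuples used by getUI().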

Modified gc_core/py/lang_core/gc_rules.py from [3cf95f4a21] to [2ef08593b5].

# generated code, do not edit

lParagraphRules = ${paragraph_rules}

lSentenceRules = ${sentence_rules}

"""
Grammar checker regex rules
"""

# generated code, do not edit

lParagraphRules = ${paragraph_rules}

lSentenceRules = ${sentence_rules}

Added gc_core/py/lang_core/gc_rules_graph.py version [373592f3fb].

"""
Grammar checker graph rules
"""

# generated code, do not edit

dAllGraph = ${rules_graphs}

dRule = ${rules_actions}
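# Illustrative shapes only, inferred from how gc_engine.py reads these structures (the real
# content is generated by the rule compiler): dAllGraph maps a graph name to a dict of numbered
# nodes, where node 0 is the root, plain token values map directly to the next node index, and
# special keys such as "<lemmas>", "<re_value>", "<re_morph>", "<tags>", "<meta>" and "<rules>"
# map to sub-dictionaries keyed by lemma, regex, tag, token type or line id; dRule maps a rule id
# to its action tuple (option, condition, action type, action, ...) as unpacked in
# TextParser._executeActions().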

Modified gc_core/py/text.py from [133d154e72] to [137c7cc30f].

#!python3





import textwrap
from itertools import chain


def getParagraph (sText):
    "generator: returns paragraphs of text"
................................................................................
        return ""
    lGrammErrs = sorted(aGrammErrs, key=lambda d: d["nStart"])
    lSpellErrs = sorted(aSpellErrs, key=lambda d: d['nStart'])
    sText = ""
    nOffset = 0
    for sLine in wrap(sParagraph, nWidth): # textwrap.wrap(sParagraph, nWidth, drop_whitespace=False)
        sText += sLine + "\n"
        ln = len(sLine)
        sErrLine = ""
        nLenErrLine = 0
        nGrammErr = 0
        nSpellErr = 0
        for dErr in lGrammErrs:
            nStart = dErr["nStart"] - nOffset
            if nStart < ln:
                nGrammErr += 1
                if nStart >= nLenErrLine:
                    sErrLine += " " * (nStart - nLenErrLine) + "^" * (dErr["nEnd"] - dErr["nStart"])
                    nLenErrLine = len(sErrLine)
            else:
                break
        for dErr in lSpellErrs:
            nStart = dErr['nStart'] - nOffset
            if nStart < ln:
                nSpellErr += 1
                nEnd = dErr['nEnd'] - nOffset
                if nEnd > len(sErrLine):
                    sErrLine += " " * (nEnd - len(sErrLine))
                sErrLine = sErrLine[:nStart] + "°" * (nEnd - nStart) + sErrLine[nEnd:]
            else:
                break
................................................................................
            sText += sErrLine + "\n"
        if nGrammErr:
            sText += getReadableErrors(lGrammErrs[:nGrammErr], nWidth)
            del lGrammErrs[0:nGrammErr]
        if nSpellErr:
            sText += getReadableErrors(lSpellErrs[:nSpellErr], nWidth, True)
            del lSpellErrs[0:nSpellErr]
        nOffset += ln
    return sText


def getReadableErrors (lErrs, nWidth, bSpell=False):
    "Returns lErrs errors as readable errors"
    sErrors = ""
    for dErr in lErrs:
................................................................................
    return sErrors


def getReadableError (dErr, bSpell=False):
    "Returns an error dErr as a readable error"
    try:
        if bSpell:
            s = u"* {nStart}:{nEnd}  # {sValue}:".format(**dErr)
        else:
            s = u"* {nStart}:{nEnd}  # {sLineId} / {sRuleId}:\n".format(**dErr)
            s += "  " + dErr.get("sMessage", "# error : message not found")
        if dErr.get("aSuggestions", None):
            s += "\n  > Suggestions : " + " | ".join(dErr.get("aSuggestions", "# error : suggestions not found"))
        if dErr.get("URL", None):
            s += "\n  > URL: " + dErr["URL"]
        return s
    except KeyError:
        return u"* Non-compliant error: {}".format(dErr)


def createParagraphWithLines (lLine):
    "Returns a text as merged lines and a set of data about lines (line_number_y, start_x, end_x)"
    sText = ""

#!python3

"""
Text tools
"""

import textwrap
from itertools import chain


def getParagraph (sText):
    "generator: returns paragraphs of text"
................................................................................
        return ""
    lGrammErrs = sorted(aGrammErrs, key=lambda d: d["nStart"])
    lSpellErrs = sorted(aSpellErrs, key=lambda d: d['nStart'])
    sText = ""
    nOffset = 0
    for sLine in wrap(sParagraph, nWidth): # textwrap.wrap(sParagraph, nWidth, drop_whitespace=False)
        sText += sLine + "\n"
        nLineLen = len(sLine)
        sErrLine = ""
        nLenErrLine = 0
        nGrammErr = 0
        nSpellErr = 0
        for dErr in lGrammErrs:
            nStart = dErr["nStart"] - nOffset
            if nStart < nLineLen:
                nGrammErr += 1
                if nStart >= nLenErrLine:
                    sErrLine += " " * (nStart - nLenErrLine) + "^" * (dErr["nEnd"] - dErr["nStart"])
                    nLenErrLine = len(sErrLine)
            else:
                break
        for dErr in lSpellErrs:
            nStart = dErr['nStart'] - nOffset
            if nStart < nLineLen:
                nSpellErr += 1
                nEnd = dErr['nEnd'] - nOffset
                if nEnd > len(sErrLine):
                    sErrLine += " " * (nEnd - len(sErrLine))
                sErrLine = sErrLine[:nStart] + "°" * (nEnd - nStart) + sErrLine[nEnd:]
            else:
                break
................................................................................
            sText += sErrLine + "\n"
        if nGrammErr:
            sText += getReadableErrors(lGrammErrs[:nGrammErr], nWidth)
            del lGrammErrs[0:nGrammErr]
        if nSpellErr:
            sText += getReadableErrors(lSpellErrs[:nSpellErr], nWidth, True)
            del lSpellErrs[0:nSpellErr]
        nOffset += nLineLen
    return sText


def getReadableErrors (lErrs, nWidth, bSpell=False):
    "Returns lErrs errors as readable errors"
    sErrors = ""
    for dErr in lErrs:
................................................................................
    return sErrors


def getReadableError (dErr, bSpell=False):
    "Returns an error dErr as a readable error"
    try:
        if bSpell:
            sText = u"* {nStart}:{nEnd}  # {sValue}:".format(**dErr)
        else:
            sText = u"* {nStart}:{nEnd}  # {sLineId} / {sRuleId}:\n".format(**dErr)
            sText += "  " + dErr.get("sMessage", "# error : message not found")
        if dErr.get("aSuggestions", None):
            sText += "\n  > Suggestions : " + " | ".join(dErr.get("aSuggestions", "# error : suggestions not found"))
        if dErr.get("URL", None):
            sText += "\n  > URL: " + dErr["URL"]
        return sText
    except KeyError:
        return u"* Non-compliant error: {}".format(dErr)


def createParagraphWithLines (lLine):
    "Returns a text as merged lines and a set of data about lines (line_number_y, start_x, end_x)"
    sText = ""

Added gc_lang/fr/French_language.txt version [e372ce9fb7].
# NOTES ON THE FRENCH LANGUAGE

## WHAT SURROUNDS A VERB

    PRONOUNS (before the verb)
        COD         COI
        le / l’
        la / l’
        les
        en
        me / m’     me / m’
        te / t’     te / t’
        se / s’     lui
        nous        nous
        vous        vous
        se / s’     leur
                    y

    SOMME
        [le|la|l’|les|en|me|m’|te|t’|se|s’|nous|vous|lui|leur|y]

    ADVERBE DE NÉGATION (avant)
        ne / n’

    COMBINAISONS VALIDES
        ?[ne|n’]¿   [me|te|se]      [le|la|l’|les]
        ?[ne|n’]¿   [m’|t’|s’]      [le|la|l’|les|en|y]
        ?[ne|n’]¿   [le|la]         [lui|leur]
        ?[ne|n’]¿   [l’|les]        [lui|leur|en|y]
        ?[ne|n’]¿   [lui|leur]      en
        ?[ne|n’]¿   [nous|vous]     [le|la|l’|les|en|y]
        ne          [le|la|l’|les|me|m’|te|t’|se|s’|nous|vous|lui|leur]
        n’          [en|y]

    RÉSUMÉ & SIMPLIFICATION
        [ne|n’|le|la|l’|les|en|me|m’|te|t’|se|s’|nous|vous|lui|leur|y]
        ?[ne|n’]¿   [le|la|l’|les|en|me|m’|te|t’|se|s’|nous|vous|lui|leur|y]
        ?[ne|n’]¿   [me|m’|te|t’|se|s’|nous|vous]   [le|la|l’|les|en|y]
        ?[ne|n’]¿   [le|la|l’|les]                  [lui|leur|en|y]
        ?[ne|n’]¿   [lui|leur]                      en

    ADVERBE DE NÉGATION (après)
        guère
        jamais
        pas
        plus
        point
        que / qu’
        rien

    PRONOMS À L’IMPÉRATIF
        APRÈS
            -moi
            -toi
            -lui
            -leur
            -nous
            -vous
            -le
            -la
            -les
            -en
            -y

        AVANT
            Uniquement les combinaisons avec l’adverbe de négation [ne|n’]

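The combination patterns above translate almost directly into membership tests. As a purely illustrative sketch (not part of this check-in), here is how the « RÉSUMÉ & SIMPLIFICATION » combinations could be checked on an already-tokenized pre-verbal pronoun cluster; the set names and the function name are assumptions.

# Hypothetical check of the simplified pre-verbal combinations listed above.
NEG  = {"ne", "n’"}
PRO1 = {"me", "m’", "te", "t’", "se", "s’", "nous", "vous"}   # followed by [le|la|l’|les|en|y]
PRO2 = {"le", "la", "l’", "les"}                              # followed by [lui|leur|en|y]
PRO3 = {"lui", "leur"}                                        # followed by “en”
ANY  = {"le", "la", "l’", "les", "en", "me", "m’", "te", "t’",
        "se", "s’", "nous", "vous", "lui", "leur", "y"}

def isValidPronounCluster (lToken):
    "True if lToken matches one of the simplified combinations above (sketch)"
    if lToken and lToken[0] in NEG:
        lToken = lToken[1:]             # the negation adverb is optional
        if not lToken:
            return True                 # bare “ne / n’” (first line of the summary)
    if len(lToken) == 1:
        return lToken[0] in ANY
    if len(lToken) == 2:
        t1, t2 = lToken
        return (t1 in PRO1 and t2 in {"le", "la", "l’", "les", "en", "y"}) \
            or (t1 in PRO2 and t2 in {"lui", "leur", "en", "y"}) \
            or (t1 in PRO3 and t2 == "en")
    return False

# isValidPronounCluster(["ne", "le", "lui"])   -> True
# isValidPronounCluster(["vous", "leur"])      -> False (not in the table above)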

## DÉTERMINANTS

    SINGULIER               PLURIEL
    le / la / l’            les
    ledit / ladite          lesdits / lesdites
    un / une                des
    du / de la              des
    dudit / de ladite       desdits / desdites
    de                      de
    ce / cet / cette        ces
    icelui / icelle         iceux / icelles
    mon / ma                mes
    ton / ta                tes
    son / sa                ses
    votre                   vos
    notre                   nos
    leur                    leurs
    quel / quelle           quels / quelles
    quelque                 quelques
    tout / toute            tous / toutes
    chaque
    aucun / aucune
    nul / nulle
                            plusieurs
                            certains / certaines
                            divers / diverses

    DÉTERMINANT & PRÉPOSITION
    au / à la               aux
    audit / à ladite        auxdits / auxdites

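A hedged illustration of how the singular/plural pairs above might be consumed: a small, deliberately partial lookup table giving the grammatical number of a few determiners, usable for a naive agreement test. The table contents come from the list above; the names are assumptions, not the project's code.

# Hypothetical, partial number table built from the determiner pairs above.
dDetNumber = {
    "le": "sg", "la": "sg", "l’": "sg", "les": "pl",
    "un": "sg", "une": "sg", "des": "pl",
    "ce": "sg", "cet": "sg", "cette": "sg", "ces": "pl",
    "mon": "sg", "ma": "sg", "mes": "pl",
    "chaque": "sg", "aucun": "sg", "aucune": "sg",
    "plusieurs": "pl", "divers": "pl", "diverses": "pl",
}

def getDetNumber (sWord):
    "return 'sg', 'pl' or None for a determiner absent from this partial table"
    return dDetNumber.get(sWord.lower(), None)

# getDetNumber("Ces") -> "pl"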

## CONJONCTIONS

    DE COORDINATION         DE SUBORDINATION
    c’est-à-dire            afin que            pendant que
    c.-à-d.                 après que           pour que
    car                     attendu que         pourvu que
    donc                    avant que           puisque
    et / &                  bien que            quand
    mais                    comme               que
    ni                      depuis que          quoique
    or                      dès que             sans que
    ou                      dès lors que        sauf que
    partant                 excepté que         selon que
    puis                    lorsque             si
    sinon                   lors que            tandis que
    soit                    malgré que          tant que
                            parce que


## PRÉPOSITIONS

    VERBALES UNIQUEMENT
        afin de

    NOMINALES ET VERBALES
        à
        entre
        excepté
        outre
        par
        pour
        sans
        sauf

    PRÉPOSITIONS ET DÉTERMINANTS
        au
        aux
        audit
        auxdits
        auxdites

    NOMINALES
        à l’instar de               devers                      par-dessus  (adv)
        à mi-distance de            dixit                       par-devant  (adv)
        après                       durant                      par-devers
        attendu                     dès                         parmi
        au-dedans   (adv)           en                          passé
        au-dehors   (adv)           endéans                     pendant
        au-delà     (adv)           envers                      pour
        au-dessous  (adv)           ès                          quant à/au/à la/aux
        au-dessus   (adv)           excepté                     revoici
        au-devant   (adv)           face à                      revoilà
        auprès de                   fors                        sauf
        autour de                   grâce à                     sans
        av                          hormis                      selon
        avant                       hors                        sous
        avec                        jusque                      suivant
        chez                        jusques                     sur
        concernant                  lez                         tandis      (adv)
        contre                      lors de                     vers
        courant (+mois)             lès                         versus
        dans                        malgré                      via
        depuis                      moins       (adv)           vis-à-vis
        derrière                    nonobstant  (adv)           voici
        dessous     (adv)           par-delà                    voilà
        dessus      (adv)           par-derrière  (adv)         vs
        devant      (adv)           par-dessous   (adv)         vu


## PRONOMS

    PRONOMS PERSONNELS SUJETS
        je                  moi-même                                mézigue
        tu                  toi-même                                tézigue
        il / elle           lui / lui-même / elle-même              césigue / sézigue
        on
        nous                nous-même / nous-mêmes                  noszigues
        vous                vous-même / vous-mêmes                  voszigues
        ils / elles         eux / eux-mêmes / elles-mêmes           leurszigues

    PRONOMS PERSONNELS OBJETS
        moi                 moi-même                                mézigue
        toi                 toi-même                                tézigue
        lui / elle          lui-même  / elle-même                   césigue / sézigue
        soi                 soi-même
        nous                nous-même / nous-mêmes                  noszigues
        vous                vous-même / vous-mêmes                  voszigues
        eux / elles         eux / eux-mêmes / elles-mêmes           leurszigues

    PRONOMS NÉGATIFS (SUJETS & OBJETS)
        aucun
        aucune
        dégun
        nul
        personne
        rien

    PRONOMS OBJETS PRÉVERBES
        la      COD
        le      COD
        les     COD
        l’      COD
        leur    COI
        lui     COI
        me      COD/COI
        te      COD/COI
        se      COD/COI
        nous    COD/COI
        vous    COD/COI
        y       COI (proadv)
        en      COD (proadv)

    PRONOMS DÉMONSTRATIFS (SUJETS ET OBJETS)
        çuilà           propersuj properobj 3pe mas sg
        ça              prodem mas sg
        ceci            prodem mas sg
        cela            prodem mas sg
        celle qui       prodem fem sg
        celles qui      prodem fem pl
        celle-ci        prodem fem sg
        celle-là        prodem fem sg
        celles-ci       prodem fem pl
        celles-là       prodem fem pl
        celui qui       prodem mas sg
        celui-ci        prodem mas sg
        celui-là        prodem mas sg
        ceux qui        prodem mas pl
        ceux-ci         prodem mas pl
        ceux-là         prodem mas pl

        icelle          detdem prodem fem sg
        icelles         detdem prodem fem pl
        icelui          detdem prodem mas sg
        iceux           detdem prodem mas pl

    PRONOMS DÉMONSTRATIFS (SUJETS)
        ce

    PRONOMS DÉMONSTRATIFS (OBJETS)
        ci              (adv)

    PRONOMS RELATIFS
        auquel          proint prorel mas sg
        auxquelles      proint prorel fem pl
        auxquels        proint prorel mas pl
        desquelles      proint prorel fem pl
        desquels        proint prorel mas pl
        dont            prorel
        duquel          proint prorel mas sg
        laquelle        proint prorel fem sg
        lequel          proint prorel mas sg
        lesquelles      proint prorel fem pl
        lesquels        proint prorel mas pl
        où              advint prorel
        qué             proint prorel
        qui             proint prorel
        que             proint prorel
        quid            proint
        quoi            proint prorel

        autre           proind
        autrui          proind
        quiconque       proind prorel
        certaine        detind proind
        chacun          proind mas sg
        chacune         proind fem sg
        d’aucuns        proind mas pl
        grand-chose     proind
        n’importe quoi  proind
        n’importe qui   proind
        plupart         proind epi pl
        quelques-unes   proind fem pl
        quelques-uns    proind mas pl
        quelqu’un       proind mas sg
        quelqu’une      proind fem sg
        telle           proind

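The tags attached to each pronoun above (prodem, proind, prorel, mas/fem, sg/pl…) read like the morphology strings that grammar-checking conditions test with regular expressions. As a purely illustrative sketch, assuming a form's morphology is available as a whitespace-separated tag string such as "prodem mas sg", a condition like “demonstrative pronoun in the plural” could be written as follows; the helper name is an assumption.

import re

def hasTags (sMorph, sPattern):
    "True if the whitespace-separated tag string matches the regex (illustrative)"
    return bool(re.search(sPattern, sMorph))

# With the tags listed above:
# hasTags("prodem mas pl", r"\bprodem\b.*\bpl\b")     -> True   (ceux-ci, ceux-là…)
# hasTags("proint prorel fem sg", r"\bprodem\b")      -> False  (laquelle is relative, not demonstrative)
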
## MOTS GRAMMATICAUX CONFUS

    a
    autour
    cela
    certain·e·s
    contre
    dans
    derrière
    durant
    entre
    excepté
    face
    la
    leur
    lui
    mais
    me
    or
    outre
    personne
    pendant
    plus
    point
    pourvu
    puis
    rien
    sauf
    soit
    son
    sous
    sur
    ton
    tout
    tu
    un
    une
    vers
    y

## VERBES À TRAITER EN PARTICULIER

    # auxiliaire
    être
    avoir
    aller

    # verbes modaux ou quasi-modaux
    adorer
    aimer
    croire
    devoir
    espérer
    faire
    falloir
    imaginer
    laisser
    partir
    penser
    pouvoir
    savoir
    venir
    vouloir

    # verbes d’état
    apparaître
    avoir l’air
    demeurer
    devenir
    paraître
    redevenir
    rester
    sembler

    # verbes d’action usuels
    commencer
    donner
    finir
    prendre
    trouver
    voir

    # dialogue
    - aboyer, accepter, acclamer, accorder, accuser, achever, acquiescer, adhérer, adjurer, admettre, admonester, affirmer, affranchir, ajouter, alléguer, anathématiser, annoncer, annoter, apostropher, appeler, applaudir, apprendre, approuver, approuver, arguer, argumenter, arrêter, articuler, assener, assurer, attester, avancer, avertir, aviser, avouer, ânonner
    - babiller, badiner, bafouer, bafouiller, balbutier, baragouiner, bavarder, beugler, blaguer, blâmer, bougonner, bourdonner, bourrasser, brailler, bramer, bredouiller, bégayer, bénir
    - cafouiller, capituler, certifier, chanter, chantonner, choisir, chuchoter, clamer, combattre, commander, commenter, compatir, compléter, composer, conclure, concéder, confesser, confier, confirmer, congratuler, considérer, conspuer, conter, contester, contredire, converser, couiner, couper, cracher, crachoter, crier, critiquer, croire, crépiter, céder
    - déclamer, demander, deviner, deviser, dialoguer, dire, discourir, discréditer, discuter, disserter, dissimuler, distinguer, divulguer, douter, débiter, décider, déclamer, déclarer, décrire, dédouaner, déduire, défendre, dégoiser, démentir, démontrer, dénoncer, déplorer, détailler, dévoiler
    - emporter, encenser, enchérir, encourager, enflammer, enguirlander, enquérir, entamer, entonner, ergoter, essayer, estimer, exagérer, examiner, exhorter, exiger, expliquer, exploser, exposer, exprimer, exulter, éclater, égosiller, égrener, éjaculer, éluder, émettre, énoncer, énumérer, épeler, établir, éternuer, étonner
    - faire fanfaronner, faire miroiter, faire remarquer, finir, flatter, formuler, fustiger, féliciter
    - garantir, geindre, glisser, glorifier, gloser, glousser, gouailler, grincer, grognasser, grogner, grommeler, gronder, gueuler, gémir
    - haleter, haranguer, hasarder, honnir, huer, hurler, héler, hésiter
    - imaginer, implorer, indiquer, infirmer, informer, injurier, innocenter, insinuer, insister, insister, insulter, intercéder, interdire, interroger, interrompre, intervenir, intimer, inventer, inventorier, invoquer, ironiser
    - jauger, jubiler, juger, jurer, justifier
    - lancer, lire, lister, louer, lâcher
    - marmonner, maugréer, menacer, mentir, mettre en garde, minauder, minimiser, monologuer, murmurer, médire, mépriser
    - narguer, narrer, nasiller, nier, négocier
    - objecter, objurguer, obliger, observer, obtempérer, opiner, ordonner, outrager
    - palabrer, papoter, parlementer, parler, penser, permettre, persifler, pester, philosopher, piaffer, pilorier, plaider, plaisanter, plastronner, pleurer, pleurnicher, polémiquer, pontifier, postillonner, pouffer, poursuivre, prier, proférer, prohiber, promettre, prophétiser, proposer, protester, prouver, préciser, préférer, présenter, prétendre, prôner, psalmodier, pérorer
    - questionner, quémander, quêter
    - rabâcher, raconter, radoter, railler, rajouter, rappeler, rapporter, rassurer, raviser, réciter, reconnaître, rectifier, redire, refuser, regretter, relater, remarquer, renauder, renchérir, renseigner, renâcler, repartir, reprendre, requérir, ressasser, revendiquer, ricaner, riposter, rire, risquer, ronchonner, ronronner, rouscailler, rouspéter, rugir, râler, réaliser, récapituler, réciter, réclamer, récuser, réfuter, répliquer, répliquer, répondre, répondre, réprimander, réprouver, répéter, résister, résumer, rétorquer, réviser, révéler
    - saluer, scruter, se gargariser, se moquer, se plaindre, se réjouir, se souvenir, seriner, sermonner, siffler, signaler, signifier, soliloquer, solliciter, sommer, souffler, souligner, soupçonner, sourire, souscrire, soutenir, stigmatiser, suggérer, supplier, supputer, susurrer, sélectionner, s’adresser, s’esclaffer, s’exclamer, s’excuser, s’impatienter, s’incliner, s’instruire, s’insurger, s’interloquer, s’intéresser, s’offusquer, s’émerveiller, s’étouffer, s’étrangler
    - taquiner, tempérer, tempêter, tenter, terminer, tonitruer, tonner, traduire
    - vanter, vanter, vilipender, vitupérer, vociférer, vomir, vérifier
    - zozoter, zézayer

Modified gc_lang/fr/config.ini from [2149787618] to [aee238173d].

lang = fr
lang_name = French
locales = fr_FR fr_BE fr_CA fr_CH fr_LU fr_BF fr_BJ fr_CD fr_CI fr_CM fr_MA fr_ML fr_MU fr_NE fr_RE fr_SN fr_TG
country_default = FR
name = Grammalecte
implname = grammalecte
# always use 3 numbers for version: x.y.z
version = 0.6.5
author = Olivier R.
provider = Dicollecte
link = http://grammalecte.net
description = Correcteur grammatical pour le français.
extras = README_fr.txt
logo = logo.png

................................................................................
# Finite state automaton compression: 1, 2 (experimental) or 3 (experimental)
fsa_method = 1
# stemming method: S for suffixes only, A for prefixes and suffixes
stemming_method = S

# LibreOffice
unopkg = C:/Program Files/LibreOffice/program/unopkg.com
oxt_version = 6.3
oxt_identifier = French.linguistic.resources.from.Dicollecte.by.OlivierR

# Firefox
fx_identifier = French-GC@grammalecte.net
fx_name = Grammalecte [fr]

win_fx_dev_path = C:\Program Files\Firefox Developer Edition\firefox.exe

lang = fr
lang_name = French
locales = fr_FR fr_BE fr_CA fr_CH fr_LU fr_BF fr_BJ fr_CD fr_CI fr_CM fr_MA fr_ML fr_MU fr_NE fr_RE fr_SN fr_TG
country_default = FR
name = Grammalecte
implname = grammalecte
# always use 3 numbers for version: x.y.z
version = 1.0
author = Olivier R.
provider = Dicollecte
link = http://grammalecte.net
description = Correcteur grammatical pour le français.
extras = README_fr.txt
logo = logo.png

................................................................................
# Finite state automaton compression: 1, 2 (experimental) or 3 (experimental)
fsa_method = 1
# stemming method: S for suffixes only, A for prefixes and suffixes
stemming_method = S

# LibreOffice
unopkg = C:/Program Files/LibreOffice/program/unopkg.com
oxt_version = 7.0
oxt_identifier = French.linguistic.resources.from.Dicollecte.by.OlivierR

# Firefox
fx_identifier = French-GC@grammalecte.net
fx_name = Grammalecte [fr]

win_fx_dev_path = C:\Program Files\Firefox Developer Edition\firefox.exe

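Both versions of gc_lang/fr/config.ini shown above (version 0.6.5 / oxt_version 6.3 before the merge, 1.0 / 7.0 after) are plain “key = value” files. A minimal sketch of a reader for the fields this check-in bumps; it skips blanks, comments and any [section] headers, and the function name is an assumption, not the project's build code.

# Hypothetical reader for the key = value pairs shown above.
def readConfig (spfIni):
    "read “key = value” pairs, skipping blanks, comments and [section] headers"
    dConf = {}
    with open(spfIni, "r", encoding="utf-8") as hSrc:
        for sLine in hSrc:
            sLine = sLine.strip()
            if not sLine or sLine.startswith("#") or sLine.startswith("["):
                continue
            if "=" in sLine:
                sKey, sValue = sLine.split("=", 1)
                dConf[sKey.strip()] = sValue.strip()
    return dConf

# dConf = readConfig("gc_lang/fr/config.ini")
# dConf["version"]      -> "0.6.5" before this merge, "1.0" after
# dConf["oxt_version"]  -> "6.3" before, "7.0" after
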
Modified gc_lang/fr/data/dictConj.txt from [f974707fe8] to [bff027e827].

_	simp 2pl	appontassiez
_	simp 3pl	appontassent
_	impe 2sg	apponte
_	impe 1pl	appontons
_	impe 2pl	appontez
_	ppas epi inv	apponté
$
apporter	1_it____a
_	infi	apporter
_	ppre	apportant
_	ipre 1sg	apporte
_	ipre 3sg	apporte
_	spre 1sg	apporte
_	spre 3sg	apporte
_	ipre 1isg	apportè
................................................................................
_	impe 1pl	dégénérons
_	impe 2pl	dégénérez
_	ppas mas sg	dégénéré
_	ppas mas pl	dégénérés
_	ppas fem sg	dégénérée
_	ppas fem pl	dégénérées
$
dégermer	1__t___zz
_	infi	dégermer
_	ppre	dégermant
_	ipre 1sg	dégerme
_	ipre 3sg	dégerme
_	spre 1sg	dégerme
_	spre 3sg	dégerme
................................................................................
_	impe 1pl	désentravons
_	impe 2pl	désentravez
_	ppas mas sg	désentravé
_	ppas mas pl	désentravés
_	ppas fem sg	désentravée
_	ppas fem pl	désentravées
$
désenvaser	1__t___zz
_	infi	désenvaser
_	ppre	désenvasant
_	ipre 1sg	désenvase
_	ipre 3sg	désenvase
_	spre 1sg	désenvase
_	spre 3sg	désenvase
................................................................................
_	impe 1pl	entraimons
_	impe 2pl	entraimez
_	ppas mas sg	entraimé
_	ppas mas pl	entraimés
_	ppas fem sg	entraimée
_	ppas fem pl	entraimées
$
entr'aimer	1____r_e_
_	infi	entr'aimer
_	ppre	entr'aimant
_	ipre 3sg	entr'aime
_	spre 3sg	entr'aime
_	ipre 1pl	entr'aimons
_	ipre 2pl	entr'aimez
_	ipre 3pl	entr'aiment
_	spre 3pl	entr'aiment
_	iimp 3sg	entr'aimait
_	iimp 1pl	entr'aimions
_	spre 1pl	entr'aimions
_	iimp 2pl	entr'aimiez
_	spre 2pl	entr'aimiez
_	iimp 3pl	entr'aimaient
_	ipsi 3sg	entr'aima
_	ipsi 1pl	entr'aimâmes
_	ipsi 2pl	entr'aimâtes
_	ipsi 3pl	entr'aimèrent
_	ifut 3sg	entr'aimera
_	ifut 1pl	entr'aimerons
_	ifut 2pl	entr'aimerez
_	ifut 3pl	entr'aimeront
_	cond 3sg	entr'aimerait
_	cond 1pl	entr'aimerions
_	cond 2pl	entr'aimeriez
_	cond 3pl	entr'aimeraient
_	simp 3sg	entr'aimât
_	simp 1pl	entr'aimassions
_	simp 2pl	entr'aimassiez
_	simp 3pl	entr'aimassent
_	impe 1pl	entr'aimons
_	impe 2pl	entr'aimez
_	ppas mas sg	entr'aimé
_	ppas mas pl	entr'aimés
_	ppas fem sg	entr'aimée
_	ppas fem pl	entr'aimées
$
entrainer	1__t_q__a
_	infi	entrainer
_	ppre	entrainant
_	ipre 1sg	entraine
_	ipre 3sg	entraine
_	spre 1sg	entraine
_	spre 3sg	entraine
................................................................................
_	simp 1pl	entraperçussions
_	simp 2pl	entraperçussiez
_	simp 3pl	entraperçussent
_	impe 2sg	entraperçois
_	impe 1pl	entrapercevons
_	impe 2pl	entrapercevez
$
entr'apercevoir	3__t_q_zz
_	infi	entr'apercevoir
_	ppre	entr'apercevant
_	ppas mas sg	entr'aperçu
_	ppas mas pl	entr'aperçus
_	ppas fem sg	entr'aperçue
_	ppas fem pl	entr'aperçues
_	ipre 1sg	entr'aperçois
_	ipre 2sg	entr'aperçois
_	ipre 3sg	entr'aperçoit
_	ipre 1pl	entr'apercevons
_	ipre 2pl	entr'apercevez
_	ipre 3pl	entr'aperçoivent
_	spre 3pl	entr'aperçoivent
_	iimp 1sg	entr'apercevais
_	iimp 2sg	entr'apercevais
_	iimp 3sg	entr'apercevait
_	iimp 1pl	entr'apercevions
_	spre 1pl	entr'apercevions
_	iimp 2pl	entr'aperceviez
_	spre 2pl	entr'aperceviez
_	iimp 3pl	entr'apercevaient
_	ipsi 1sg	entr'aperçus
_	ipsi 2sg	entr'aperçus
_	ipsi 3sg	entr'aperçut
_	ipsi 1pl	entr'aperçûmes
_	ipsi 2pl	entr'aperçûtes
_	ipsi 3pl	entr'aperçurent
_	ifut 1sg	entr'apercevrai
_	ifut 2sg	entr'apercevras
_	ifut 3sg	entr'apercevra
_	ifut 1pl	entr'apercevrons
_	ifut 2pl	entr'apercevrez
_	ifut 3pl	entr'apercevront
_	cond 1sg	entr'apercevrais
_	cond 2sg	entr'apercevrais
_	cond 3sg	entr'apercevrait
_	cond 1pl	entr'apercevrions
_	cond 2pl	entr'apercevriez
_	cond 3pl	entr'apercevraient
_	spre 1sg	entr'aperçoive
_	spre 3sg	entr'aperçoive
_	spre 2sg	entr'aperçoives
_	simp 1sg	entr'aperçusse
_	simp 2sg	entr'aperçusses
_	simp 3sg	entr'aperçût
_	simp 1pl	entr'aperçussions
_	simp 2pl	entr'aperçussiez
_	simp 3pl	entr'aperçussent
_	impe 2sg	entr'aperçois
_	impe 1pl	entr'apercevons
_	impe 2pl	entr'apercevez
$
entraver	1__t___zz
_	infi	entraver
_	ppre	entravant
_	ipre 1sg	entrave
_	ipre 3sg	entrave
_	spre 1sg	entrave
_	spre 3sg	entrave
................................................................................
_	impe 1pl	entrégorgeons
_	impe 2pl	entrégorgez
_	ppas mas sg	entrégorgé
_	ppas mas pl	entrégorgés
_	ppas fem sg	entrégorgée
_	ppas fem pl	entrégorgées
$
entr'égorger	1____r_e_
_	infi	entr'égorger
_	ppre	entr'égorgeant
_	ipre 3sg	entr'égorge
_	spre 3sg	entr'égorge
_	ipre 1pl	entr'égorgeons
_	ipre 2pl	entr'égorgez
_	ipre 3pl	entr'égorgent
_	spre 3pl	entr'égorgent
_	iimp 3sg	entr'égorgeait
_	iimp 1pl	entr'égorgions
_	spre 1pl	entr'égorgions
_	iimp 2pl	entr'égorgiez
_	spre 2pl	entr'égorgiez
_	iimp 3pl	entr'égorgeaient
_	ipsi 3sg	entr'égorgea
_	ipsi 1pl	entr'égorgeâmes
_	ipsi 2pl	entr'égorgeâtes
_	ipsi 3pl	entr'égorgèrent
_	ifut 3sg	entr'égorgera
_	ifut 1pl	entr'égorgerons
_	ifut 2pl	entr'égorgerez
_	ifut 3pl	entr'égorgeront
_	cond 3sg	entr'égorgerait
_	cond 1pl	entr'égorgerions
_	cond 2pl	entr'égorgeriez
_	cond 3pl	entr'égorgeraient
_	simp 3sg	entr'égorgeât
_	simp 1pl	entr'égorgeassions
_	simp 2pl	entr'égorgeassiez
_	simp 3pl	entr'égorgeassent
_	impe 1pl	entr'égorgeons
_	impe 2pl	entr'égorgez
_	ppas mas sg	entr'égorgé
_	ppas mas pl	entr'égorgés
_	ppas fem sg	entr'égorgée
_	ppas fem pl	entr'égorgées
$
entrehaïr	2____r_e_
_	infi	entrehaïr
_	ppre	entrehaïssant
_	ppas mas sg	entrehaï
_	ppas mas pl	entrehaïs
_	ppas fem sg	entrehaïe
_	ppas fem pl	entrehaïes
................................................................................
_	impe 1pl	entre-heurtons
_	impe 2pl	entre-heurtez
_	ppas mas sg	entre-heurté
_	ppas mas pl	entre-heurtés
_	ppas fem sg	entre-heurtée
_	ppas fem pl	entre-heurtées
$
entrelacer	1__t_q_zz
_	infi	entrelacer
_	ppre	entrelaçant
_	ipre 1sg	entrelace
_	ipre 3sg	entrelace
_	spre 1sg	entrelace
_	spre 3sg	entrelace
_	ipre 1isg	entrelacè
................................................................................
_	impe 1pl	entrevoûtons
_	impe 2pl	entrevoûtez
_	ppas mas sg	entrevoûté
_	ppas mas pl	entrevoûtés
_	ppas fem sg	entrevoûtée
_	ppas fem pl	entrevoûtées
$
entr'hiverner	1__t___zz
_	infi	entr'hiverner
$
entrouvrir	3__t_q_zz
_	infi	entrouvrir
_	ppre	entrouvrant
_	ppas mas sg	entrouvert
_	ppas mas pl	entrouverts
_	ppas fem sg	entrouverte
_	ppas fem pl	entrouvertes
................................................................................
_	simp 1pl	entrouvrissions
_	simp 2pl	entrouvrissiez
_	simp 3pl	entrouvrissent
_	impe 2sg	entrouvre
_	impe 1pl	entrouvrons
_	impe 2pl	entrouvrez
$
entr'ouvrir	3__t_q_zz
_	infi	entr'ouvrir
_	ppre	entr'ouvrant
_	ppas mas sg	entr'ouvert
_	ppas mas pl	entr'ouverts
_	ppas fem sg	entr'ouverte
_	ppas fem pl	entr'ouvertes
_	ipre 1sg	entr'ouvre
_	ipre 3sg	entr'ouvre
_	spre 1sg	entr'ouvre
_	spre 3sg	entr'ouvre
_	ipre 2sg	entr'ouvres
_	ipre 1pl	entr'ouvrons
_	ipre 2pl	entr'ouvrez
_	ipre 3pl	entr'ouvrent
_	spre 3pl	entr'ouvrent
_	iimp 1sg	entr'ouvrais
_	iimp 2sg	entr'ouvrais
_	iimp 3sg	entr'ouvrait
_	iimp 1pl	entr'ouvrions
_	spre 1pl	entr'ouvrions
_	iimp 2pl	entr'ouvriez
_	spre 2pl	entr'ouvriez
_	iimp 3pl	entr'ouvraient
_	ipsi 1sg	entr'ouvris
_	ipsi 2sg	entr'ouvris
_	ipsi 3sg	entr'ouvrit
_	ipsi 1pl	entr'ouvrîmes
_	ipsi 2pl	entr'ouvrîtes
_	ipsi 3pl	entr'ouvrirent
_	ifut 1sg	entr'ouvrirai
_	ifut 2sg	entr'ouvriras
_	ifut 3sg	entr'ouvrira
_	ifut 1pl	entr'ouvrirons
_	ifut 2pl	entr'ouvrirez
_	ifut 3pl	entr'ouvriront
_	cond 1sg	entr'ouvrirais
_	cond 2sg	entr'ouvrirais
_	cond 3sg	entr'ouvrirait
_	cond 1pl	entr'ouvririons
_	cond 2pl	entr'ouvririez
_	cond 3pl	entr'ouvriraient
_	simp 1sg	entr'ouvrisse
_	simp 2sg	entr'ouvrisses
_	simp 3sg	entr'ouvrît
_	simp 1pl	entr'ouvrissions
_	simp 2pl	entr'ouvrissiez
_	simp 3pl	entr'ouvrissent
_	impe 2sg	entr'ouvre
_	impe 1pl	entr'ouvrons
_	impe 2pl	entr'ouvrez
$
entuber	1__t___zz
_	infi	entuber
_	ppre	entubant
_	ipre 1sg	entube
_	ipre 3sg	entube
_	spre 1sg	entube
................................................................................
_	impe 1pl	resocialisons
_	impe 2pl	resocialisez
_	ppas mas sg	resocialisé
_	ppas mas pl	resocialisés
_	ppas fem sg	resocialisée
_	ppas fem pl	resocialisées
$
résonner	1_i____zz
_	infi	résonner
_	ppre	résonnant
_	ipre 1sg	résonne
_	ipre 3sg	résonne
_	spre 1sg	résonne
_	spre 3sg	résonne
................................................................................
_	impe 1pl	rythmons
_	impe 2pl	rythmez
_	ppas mas sg	rythmé
_	ppas mas pl	rythmés
_	ppas fem sg	rythmée
_	ppas fem pl	rythmées
$
s'abader	1____p_e_
_	infi	s'abader
$
sabler	1_it___zz
_	infi	sabler
_	ppre	sablant
_	ipre 1sg	sable
_	ipre 3sg	sable
_	spre 1sg	sable
_	spre 3sg	sable
................................................................................
_	impe 1pl	sabrons
_	impe 2pl	sabrez
_	ppas mas sg	sabré
_	ppas mas pl	sabrés
_	ppas fem sg	sabrée
_	ppas fem pl	sabrées
$
s'abriller	1____p_e_
_	infi	s'abriller
$
sacagner	1__t___zz
_	infi	sacagner
_	ppre	sacagnant
_	ipre 1sg	sacagne
_	ipre 3sg	sacagne
_	spre 1sg	sacagne
_	spre 3sg	sacagne
................................................................................
_	impe 1pl	safranons
_	impe 2pl	safranez
_	ppas mas sg	safrané
_	ppas mas pl	safranés
_	ppas fem sg	safranée
_	ppas fem pl	safranées
$
s'agir	2____p_e_
_	infi	s'agir
_	ppre	s'agissant
_	ipre 3sg	s'agit
_	iimp 3sg	s'agissait
_	ifut 3sg	s'agira
_	cond 3sg	s'agirait
_	spre 3sg	s'agisse
_	simp 3sg	s'agît
$
saietter	1__t___zz
_	infi	saietter
_	ppre	saiettant
_	ipre 1sg	saiette
_	ipre 3sg	saiette
_	spre 1sg	saiette
_	spre 3sg	saiette
................................................................................
_	impe 1pl	solidarisons
_	impe 2pl	solidarisez
_	ppas mas sg	solidarisé
_	ppas mas pl	solidarisés
_	ppas fem sg	solidarisée
_	ppas fem pl	solidarisées
$
solidifier	1__t_q_zz
_	infi	solidifier
_	ppre	solidifiant
_	ipre 1sg	solidifie
_	ipre 3sg	solidifie
_	spre 1sg	solidifie
_	spre 3sg	solidifie
_	ipre 1isg	solidifiè
................................................................................
_	impe 1pl	systématisons
_	impe 2pl	systématisez
_	ppas mas sg	systématisé
_	ppas mas pl	systématisés
_	ppas fem sg	systématisée
_	ppas fem pl	systématisées
$
tabasser	1__t_q_zz
_	infi	tabasser
_	ppre	tabassant
_	ipre 1sg	tabasse
_	ipre 3sg	tabasse
_	spre 1sg	tabasse
_	spre 3sg	tabasse
................................................................................
_	impe 1pl	transitons
_	impe 2pl	transitez
_	ppas mas sg	transité
_	ppas mas pl	transités
_	ppas fem sg	transitée
_	ppas fem pl	transitées
$
translater	1__t___zz
_	infi	translater
_	ppre	translatant
_	ipre 1sg	translate
_	ipre 3sg	translate
_	spre 1sg	translate
_	spre 3sg	translate

_	simp 2pl	appontassiez
_	simp 3pl	appontassent
_	impe 2sg	apponte
_	impe 1pl	appontons
_	impe 2pl	appontez
_	ppas epi inv	apponté
$
apporter	1_itn___a
_	infi	apporter
_	ppre	apportant
_	ipre 1sg	apporte
_	ipre 3sg	apporte
_	spre 1sg	apporte
_	spre 3sg	apporte
_	ipre 1isg	apportè
................................................................................
_	impe 1pl	dégénérons
_	impe 2pl	dégénérez
_	ppas mas sg	dégénéré
_	ppas mas pl	dégénérés
_	ppas fem sg	dégénérée
_	ppas fem pl	dégénérées
$
dégenrer	1__t_q__a
_	infi	dégenrer
_	ppre	dégenrant
_	ipre 1sg	dégenre
_	ipre 3sg	dégenre
_	spre 1sg	dégenre
_	spre 3sg	dégenre
_	ipre 1isg	dégenrè
_	ipre 2sg	dégenres
_	spre 2sg	dégenres
_	ipre 1pl	dégenrons
_	ipre 2pl	dégenrez
_	ipre 3pl	dégenrent
_	spre 3pl	dégenrent
_	iimp 1sg	dégenrais
_	iimp 2sg	dégenrais
_	iimp 3sg	dégenrait
_	iimp 1pl	dégenrions
_	spre 1pl	dégenrions
_	iimp 2pl	dégenriez
_	spre 2pl	dégenriez
_	iimp 3pl	dégenraient
_	ipsi 1sg	dégenrai
_	ipsi 2sg	dégenras
_	ipsi 3sg	dégenra
_	ipsi 1pl	dégenrâmes
_	ipsi 2pl	dégenrâtes
_	ipsi 3pl	dégenrèrent
_	ifut 1sg	dégenrerai
_	ifut 2sg	dégenreras
_	ifut 3sg	dégenrera
_	ifut 1pl	dégenrerons
_	ifut 2pl	dégenrerez
_	ifut 3pl	dégenreront
_	cond 1sg	dégenrerais
_	cond 2sg	dégenrerais
_	cond 3sg	dégenrerait
_	cond 1pl	dégenrerions
_	cond 2pl	dégenreriez
_	cond 3pl	dégenreraient
_	simp 1sg	dégenrasse
_	simp 2sg	dégenrasses
_	simp 3sg	dégenrât
_	simp 1pl	dégenrassions
_	simp 2pl	dégenrassiez
_	simp 3pl	dégenrassent
_	impe 2sg	dégenre
_	impe 1pl	dégenrons
_	impe 2pl	dégenrez
_	ppas mas sg	dégenré
_	ppas mas pl	dégenrés
_	ppas fem sg	dégenrée
_	ppas fem pl	dégenrées
$
dégermer	1__t___zz
_	infi	dégermer
_	ppre	dégermant
_	ipre 1sg	dégerme
_	ipre 3sg	dégerme
_	spre 1sg	dégerme
_	spre 3sg	dégerme
................................................................................
_	impe 1pl	désentravons
_	impe 2pl	désentravez
_	ppas mas sg	désentravé
_	ppas mas pl	désentravés
_	ppas fem sg	désentravée
_	ppas fem pl	désentravées
$
désentrelacer	1__t_q__a
_	infi	désentrelacer
_	ppre	désentrelaçant
_	ipre 1sg	désentrelace
_	ipre 3sg	désentrelace
_	spre 1sg	désentrelace
_	spre 3sg	désentrelace
_	ipre 1isg	désentrelacè
_	ipre 2sg	désentrelaces
_	spre 2sg	désentrelaces
_	ipre 1pl	désentrelaçons
_	ipre 2pl	désentrelacez
_	ipre 3pl	désentrelacent
_	spre 3pl	désentrelacent
_	iimp 1sg	désentrelaçais
_	iimp 2sg	désentrelaçais
_	iimp 3sg	désentrelaçait
_	iimp 1pl	désentrelacions
_	spre 1pl	désentrelacions
_	iimp 2pl	désentrelaciez
_	spre 2pl	désentrelaciez
_	iimp 3pl	désentrelaçaient
_	ipsi 1sg	désentrelaçai
_	ipsi 2sg	désentrelaças
_	ipsi 3sg	désentrelaça
_	ipsi 1pl	désentrelaçâmes
_	ipsi 2pl	désentrelaçâtes
_	ipsi 3pl	désentrelacèrent
_	ifut 1sg	désentrelacerai
_	ifut 2sg	désentrelaceras
_	ifut 3sg	désentrelacera
_	ifut 1pl	désentrelacerons
_	ifut 2pl	désentrelacerez
_	ifut 3pl	désentrelaceront
_	cond 1sg	désentrelacerais
_	cond 2sg	désentrelacerais
_	cond 3sg	désentrelacerait
_	cond 1pl	désentrelacerions
_	cond 2pl	désentrelaceriez
_	cond 3pl	désentrelaceraient
_	simp 1sg	désentrelaçasse
_	simp 2sg	désentrelaçasses
_	simp 3sg	désentrelaçât
_	simp 1pl	désentrelaçassions
_	simp 2pl	désentrelaçassiez
_	simp 3pl	désentrelaçassent
_	impe 2sg	désentrelace
_	impe 1pl	désentrelaçons
_	impe 2pl	désentrelacez
_	ppas mas sg	désentrelacé
_	ppas mas pl	désentrelacés
_	ppas fem sg	désentrelacée
_	ppas fem pl	désentrelacées
$
désenvaser	1__t___zz
_	infi	désenvaser
_	ppre	désenvasant
_	ipre 1sg	désenvase
_	ipre 3sg	désenvase
_	spre 1sg	désenvase
_	spre 3sg	désenvase
................................................................................
_	impe 1pl	entraimons
_	impe 2pl	entraimez
_	ppas mas sg	entraimé
_	ppas mas pl	entraimés
_	ppas fem sg	entraimée
_	ppas fem pl	entraimées
$
entrainer	1__t_q__a
_	infi	entrainer
_	ppre	entrainant
_	ipre 1sg	entraine
_	ipre 3sg	entraine
_	spre 1sg	entraine
_	spre 3sg	entraine
................................................................................
_	simp 1pl	entraperçussions
_	simp 2pl	entraperçussiez
_	simp 3pl	entraperçussent
_	impe 2sg	entraperçois
_	impe 1pl	entrapercevons
_	impe 2pl	entrapercevez
$
entraver	1__t___zz
_	infi	entraver
_	ppre	entravant
_	ipre 1sg	entrave
_	ipre 3sg	entrave
_	spre 1sg	entrave
_	spre 3sg	entrave
................................................................................
_	impe 1pl	entrégorgeons
_	impe 2pl	entrégorgez
_	ppas mas sg	entrégorgé
_	ppas mas pl	entrégorgés
_	ppas fem sg	entrégorgée
_	ppas fem pl	entrégorgées
$
entrehaïr	2____r_e_
_	infi	entrehaïr
_	ppre	entrehaïssant
_	ppas mas sg	entrehaï
_	ppas mas pl	entrehaïs
_	ppas fem sg	entrehaïe
_	ppas fem pl	entrehaïes
................................................................................
_	impe 1pl	entre-heurtons
_	impe 2pl	entre-heurtez
_	ppas mas sg	entre-heurté
_	ppas mas pl	entre-heurtés
_	ppas fem sg	entre-heurtée
_	ppas fem pl	entre-heurtées
$
entrelacer	1__t_q__a
_	infi	entrelacer
_	ppre	entrelaçant
_	ipre 1sg	entrelace
_	ipre 3sg	entrelace
_	spre 1sg	entrelace
_	spre 3sg	entrelace
_	ipre 1isg	entrelacè
................................................................................
_	impe 1pl	entrevoûtons
_	impe 2pl	entrevoûtez
_	ppas mas sg	entrevoûté
_	ppas mas pl	entrevoûtés
_	ppas fem sg	entrevoûtée
_	ppas fem pl	entrevoûtées
$
entrouvrir	3__t_q_zz
_	infi	entrouvrir
_	ppre	entrouvrant
_	ppas mas sg	entrouvert
_	ppas mas pl	entrouverts
_	ppas fem sg	entrouverte
_	ppas fem pl	entrouvertes
................................................................................
_	simp 1pl	entrouvrissions
_	simp 2pl	entrouvrissiez
_	simp 3pl	entrouvrissent
_	impe 2sg	entrouvre
_	impe 1pl	entrouvrons
_	impe 2pl	entrouvrez
$
entr’aimer	1____r_e_
_	infi	entr’aimer
_	ppre	entr’aimant
_	ipre 3sg	entr’aime
_	spre 3sg	entr’aime
_	ipre 1pl	entr’aimons
_	ipre 2pl	entr’aimez
_	ipre 3pl	entr’aiment
_	spre 3pl	entr’aiment
_	iimp 3sg	entr’aimait
_	iimp 1pl	entr’aimions
_	spre 1pl	entr’aimions
_	iimp 2pl	entr’aimiez
_	spre 2pl	entr’aimiez
_	iimp 3pl	entr’aimaient
_	ipsi 3sg	entr’aima
_	ipsi 1pl	entr’aimâmes
_	ipsi 2pl	entr’aimâtes
_	ipsi 3pl	entr’aimèrent
_	ifut 3sg	entr’aimera
_	ifut 1pl	entr’aimerons
_	ifut 2pl	entr’aimerez
_	ifut 3pl	entr’aimeront
_	cond 3sg	entr’aimerait
_	cond 1pl	entr’aimerions
_	cond 2pl	entr’aimeriez
_	cond 3pl	entr’aimeraient
_	simp 3sg	entr’aimât
_	simp 1pl	entr’aimassions
_	simp 2pl	entr’aimassiez
_	simp 3pl	entr’aimassent
_	impe 1pl	entr’aimons
_	impe 2pl	entr’aimez
_	ppas mas sg	entr’aimé
_	ppas mas pl	entr’aimés
_	ppas fem sg	entr’aimée
_	ppas fem pl	entr’aimées
$
entr’apercevoir	3__t_q_zz
_	infi	entr’apercevoir
_	ppre	entr’apercevant
_	ppas mas sg	entr’aperçu
_	ppas mas pl	entr’aperçus
_	ppas fem sg	entr’aperçue
_	ppas fem pl	entr’aperçues
_	ipre 1sg	entr’aperçois
_	ipre 2sg	entr’aperçois
_	ipre 3sg	entr’aperçoit
_	ipre 1pl	entr’apercevons
_	ipre 2pl	entr’apercevez
_	ipre 3pl	entr’aperçoivent
_	spre 3pl	entr’aperçoivent
_	iimp 1sg	entr’apercevais
_	iimp 2sg	entr’apercevais
_	iimp 3sg	entr’apercevait
_	iimp 1pl	entr’apercevions
_	spre 1pl	entr’apercevions
_	iimp 2pl	entr’aperceviez
_	spre 2pl	entr’aperceviez
_	iimp 3pl	entr’apercevaient
_	ipsi 1sg	entr’aperçus
_	ipsi 2sg	entr’aperçus
_	ipsi 3sg	entr’aperçut
_	ipsi 1pl	entr’aperçûmes
_	ipsi 2pl	entr’aperçûtes
_	ipsi 3pl	entr’aperçurent
_	ifut 1sg	entr’apercevrai
_	ifut 2sg	entr’apercevras
_	ifut 3sg	entr’apercevra
_	ifut 1pl	entr’apercevrons
_	ifut 2pl	entr’apercevrez
_	ifut 3pl	entr’apercevront
_	cond 1sg	entr’apercevrais
_	cond 2sg	entr’apercevrais
_	cond 3sg	entr’apercevrait
_	cond 1pl	entr’apercevrions
_	cond 2pl	entr’apercevriez
_	cond 3pl	entr’apercevraient
_	spre 1sg	entr’aperçoive
_	spre 3sg	entr’aperçoive
_	spre 2sg	entr’aperçoives
_	simp 1sg	entr’aperçusse
_	simp 2sg	entr’aperçusses
_	simp 3sg	entr’aperçût
_	simp 1pl	entr’aperçussions
_	simp 2pl	entr’aperçussiez
_	simp 3pl	entr’aperçussent
_	impe 2sg	entr’aperçois
_	impe 1pl	entr’apercevons
_	impe 2pl	entr’apercevez
$
entr’égorger	1____r_e_
_	infi	entr’égorger
_	ppre	entr’égorgeant
_	ipre 3sg	entr’égorge
_	spre 3sg	entr’égorge
_	ipre 1pl	entr’égorgeons
_	ipre 2pl	entr’égorgez
_	ipre 3pl	entr’égorgent
_	spre 3pl	entr’égorgent
_	iimp 3sg	entr’égorgeait
_	iimp 1pl	entr’égorgions
_	spre 1pl	entr’égorgions
_	iimp 2pl	entr’égorgiez
_	spre 2pl	entr’égorgiez
_	iimp 3pl	entr’égorgeaient
_	ipsi 3sg	entr’égorgea
_	ipsi 1pl	entr’égorgeâmes
_	ipsi 2pl	entr’égorgeâtes
_	ipsi 3pl	entr’égorgèrent
_	ifut 3sg	entr’égorgera
_	ifut 1pl	entr’égorgerons
_	ifut 2pl	entr’égorgerez
_	ifut 3pl	entr’égorgeront
_	cond 3sg	entr’égorgerait
_	cond 1pl	entr’égorgerions
_	cond 2pl	entr’égorgeriez
_	cond 3pl	entr’égorgeraient
_	simp 3sg	entr’égorgeât
_	simp 1pl	entr’égorgeassions
_	simp 2pl	entr’égorgeassiez
_	simp 3pl	entr’égorgeassent
_	impe 1pl	entr’égorgeons
_	impe 2pl	entr’égorgez
_	ppas mas sg	entr’égorgé
_	ppas mas pl	entr’égorgés
_	ppas fem sg	entr’égorgée
_	ppas fem pl	entr’égorgées
$
entr’hiverner	1__t___zz
_	infi	entr’hiverner
$
entrouvrir	3__t_q_zz
_	infi	entrouvrir
_	ppre	entrouvrant
_	ppas mas sg	entrouvert
_	ppas mas pl	entrouverts
_	ppas fem sg	entrouverte
_	ppas fem pl	entrouvertes
_	ipre 1sg	entrouvre
_	ipre 3sg	entrouvre
_	spre 1sg	entrouvre
_	spre 3sg	entrouvre
_	ipre 2sg	entrouvres
_	ipre 1pl	entrouvrons
_	ipre 2pl	entrouvrez
_	ipre 3pl	entrouvrent
_	spre 3pl	entrouvrent
_	iimp 1sg	entrouvrais
_	iimp 2sg	entrouvrais
_	iimp 3sg	entrouvrait
_	iimp 1pl	entrouvrions
_	spre 1pl	entrouvrions
_	iimp 2pl	entrouvriez
_	spre 2pl	entrouvriez
_	iimp 3pl	entrouvraient
_	ipsi 1sg	entrouvris
_	ipsi 2sg	entrouvris
_	ipsi 3sg	entrouvrit
_	ipsi 1pl	entrouvrîmes
_	ipsi 2pl	entrouvrîtes
_	ipsi 3pl	entrouvrirent
_	ifut 1sg	entrouvrirai
_	ifut 2sg	entrouvriras
_	ifut 3sg	entrouvrira
_	ifut 1pl	entrouvrirons
_	ifut 2pl	entrouvrirez
_	ifut 3pl	entrouvriront
_	cond 1sg	entrouvrirais
_	cond 2sg	entrouvrirais
_	cond 3sg	entrouvrirait
_	cond 1pl	entrouvririons
_	cond 2pl	entrouvririez
_	cond 3pl	entrouvriraient
_	simp 1sg	entrouvrisse
_	simp 2sg	entrouvrisses
_	simp 3sg	entrouvrît
_	simp 1pl	entrouvrissions
_	simp 2pl	entrouvrissiez
_	simp 3pl	entrouvrissent
_	impe 2sg	entrouvre
_	impe 1pl	entrouvrons
_	impe 2pl	entrouvrez
$
entuber	1__t___zz
_	infi	entuber
_	ppre	entubant
_	ipre 1sg	entube
_	ipre 3sg	entube
_	spre 1sg	entube
................................................................................
_	impe 1pl	resocialisons
_	impe 2pl	resocialisez
_	ppas mas sg	resocialisé
_	ppas mas pl	resocialisés
_	ppas fem sg	resocialisée
_	ppas fem pl	resocialisées
$
resolidifier	1_it_q__a
_	infi	resolidifier
_	ppre	resolidifiant
_	ipre 1sg	resolidifie
_	ipre 3sg	resolidifie
_	spre 1sg	resolidifie
_	spre 3sg	resolidifie
_	ipre 1isg	resolidifiè
_	ipre 2sg	resolidifies
_	spre 2sg	resolidifies
_	ipre 1pl	resolidifions
_	ipre 2pl	resolidifiez
_	ipre 3pl	resolidifient
_	spre 3pl	resolidifient
_	iimp 1sg	resolidifiais
_	iimp 2sg	resolidifiais
_	iimp 3sg	resolidifiait
_	iimp 1pl	resolidifiions
_	spre 1pl	resolidifiions
_	iimp 2pl	resolidifiiez
_	spre 2pl	resolidifiiez
_	iimp 3pl	resolidifiaient
_	ipsi 1sg	resolidifiai
_	ipsi 2sg	resolidifias
_	ipsi 3sg	resolidifia
_	ipsi 1pl	resolidifiâmes
_	ipsi 2pl	resolidifiâtes
_	ipsi 3pl	resolidifièrent
_	ifut 1sg	resolidifierai
_	ifut 2sg	resolidifieras
_	ifut 3sg	resolidifiera
_	ifut 1pl	resolidifierons
_	ifut 2pl	resolidifierez
_	ifut 3pl	resolidifieront
_	cond 1sg	resolidifierais
_	cond 2sg	resolidifierais
_	cond 3sg	resolidifierait
_	cond 1pl	resolidifierions
_	cond 2pl	resolidifieriez
_	cond 3pl	resolidifieraient
_	simp 1sg	resolidifiasse
_	simp 2sg	resolidifiasses
_	simp 3sg	resolidifiât
_	simp 1pl	resolidifiassions
_	simp 2pl	resolidifiassiez
_	simp 3pl	resolidifiassent
_	impe 2sg	resolidifie
_	impe 1pl	resolidifions
_	impe 2pl	resolidifiez
_	ppas mas sg	resolidifié
_	ppas mas pl	resolidifiés
_	ppas fem sg	resolidifiée
_	ppas fem pl	resolidifiées
$
résonner	1_i____zz
_	infi	résonner
_	ppre	résonnant
_	ipre 1sg	résonne
_	ipre 3sg	résonne
_	spre 1sg	résonne
_	spre 3sg	résonne
................................................................................
_	impe 1pl	rythmons
_	impe 2pl	rythmez
_	ppas mas sg	rythmé
_	ppas mas pl	rythmés
_	ppas fem sg	rythmée
_	ppas fem pl	rythmées
$
sabler	1_it___zz
_	infi	sabler
_	ppre	sablant
_	ipre 1sg	sable
_	ipre 3sg	sable
_	spre 1sg	sable
_	spre 3sg	sable
................................................................................
_	impe 1pl	sabrons
_	impe 2pl	sabrez
_	ppas mas sg	sabré
_	ppas mas pl	sabrés
_	ppas fem sg	sabrée
_	ppas fem pl	sabrées
$
sacagner	1__t___zz
_	infi	sacagner
_	ppre	sacagnant
_	ipre 1sg	sacagne
_	ipre 3sg	sacagne
_	spre 1sg	sacagne
_	spre 3sg	sacagne
................................................................................
_	impe 1pl	safranons
_	impe 2pl	safranez
_	ppas mas sg	safrané
_	ppas mas pl	safranés
_	ppas fem sg	safranée
_	ppas fem pl	safranées
$
saietter	1__t___zz
_	infi	saietter
_	ppre	saiettant
_	ipre 1sg	saiette
_	ipre 3sg	saiette
_	spre 1sg	saiette
_	spre 3sg	saiette
................................................................................
_	impe 1pl	solidarisons
_	impe 2pl	solidarisez
_	ppas mas sg	solidarisé
_	ppas mas pl	solidarisés
_	ppas fem sg	solidarisée
_	ppas fem pl	solidarisées
$
solidifier	1_it_q__a
_	infi	solidifier
_	ppre	solidifiant
_	ipre 1sg	solidifie
_	ipre 3sg	solidifie
_	spre 1sg	solidifie
_	spre 3sg	solidifie
_	ipre 1isg	solidifiè
................................................................................
_	impe 1pl	systématisons
_	impe 2pl	systématisez
_	ppas mas sg	systématisé
_	ppas mas pl	systématisés
_	ppas fem sg	systématisée
_	ppas fem pl	systématisées
$
s’abader	1____p_e_
_	infi	s’abader
$
s’abriller	1____p_e_
_	infi	s’abriller
$
s’agir	2____p_e_
_	infi	s’agir
_	ppre	s’agissant
_	ipre 3sg	s’agit
_	iimp 3sg	s’agissait
_	ifut 3sg	s’agira
_	cond 3sg	s’agirait
_	spre 3sg	s’agisse
_	simp 3sg	s’agît
$
tabasser	1__t_q_zz
_	infi	tabasser
_	ppre	tabassant
_	ipre 1sg	tabasse
_	ipre 3sg	tabasse
_	spre 1sg	tabasse
_	spre 3sg	tabasse
................................................................................
_	impe 1pl	transitons
_	impe 2pl	transitez
_	ppas mas sg	transité
_	ppas mas pl	transités
_	ppas fem sg	transitée
_	ppas fem pl	transitées
$
transitionner	1_i_____a
_	infi	transitionner
_	ppre	transitionnant
_	ipre 1sg	transitionne
_	ipre 3sg	transitionne
_	spre 1sg	transitionne
_	spre 3sg	transitionne
_	ipre 1isg	transitionnè
_	ipre 2sg	transitionnes
_	spre 2sg	transitionnes
_	ipre 1pl	transitionnons
_	ipre 2pl	transitionnez
_	ipre 3pl	transitionnent
_	spre 3pl	transitionnent
_	iimp 1sg	transitionnais
_	iimp 2sg	transitionnais
_	iimp 3sg	transitionnait
_	iimp 1pl	transitionnions
_	spre 1pl	transitionnions
_	iimp 2pl	transitionniez
_	spre 2pl	transitionniez
_	iimp 3pl	transitionnaient
_	ipsi 1sg	transitionnai
_	ipsi 2sg	transitionnas
_	ipsi 3sg	transitionna
_	ipsi 1pl	transitionnâmes
_	ipsi 2pl	transitionnâtes
_	ipsi 3pl	transitionnèrent
_	ifut 1sg	transitionnerai
_	ifut 2sg	transitionneras
_	ifut 3sg	transitionnera
_	ifut 1pl	transitionnerons
_	ifut 2pl	transitionnerez
_	ifut 3pl	transitionneront
_	cond 1sg	transitionnerais
_	cond 2sg	transitionnerais
_	cond 3sg	transitionnerait
_	cond 1pl	transitionnerions
_	cond 2pl	transitionneriez
_	cond 3pl	transitionneraient
_	simp 1sg	transitionnasse
_	simp 2sg	transitionnasses
_	simp 3sg	transitionnât
_	simp 1pl	transitionnassions
_	simp 2pl	transitionnassiez
_	simp 3pl	transitionnassent
_	impe 2sg	transitionne
_	impe 1pl	transitionnons
_	impe 2pl	transitionnez
_	ppas mas sg	transitionné
_	ppas mas pl	transitionnés
_	ppas fem sg	transitionnée
_	ppas fem pl	transitionnées
$
translater	1__t___zz
_	infi	translater
_	ppre	translatant
_	ipre 1sg	translate
_	ipre 3sg	translate
_	spre 1sg	translate
_	spre 3sg	translate

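The dictConj.txt hunks above all follow the same record layout: a header line « lemma<TAB>pattern-code », then one « _<TAB>tags<TAB>inflected form » line per form, and a single « $ » closing the record. A minimal sketch of a reader for that layout, under the assumption that the whole file uses it; the function name is illustrative, not the project's actual loader.

# Hypothetical reader for the record layout shown above:
#   lemma \t code
#   _ \t tags \t form      (repeated)
#   $                      (end of record)
def readConjRecords (spfDict):
    dVerb = {}
    sLemma = None
    with open(spfDict, "r", encoding="utf-8") as hSrc:
        for sLine in hSrc:
            sLine = sLine.rstrip("\n")
            if not sLine:
                continue
            if sLine == "$":
                sLemma = None                       # record finished
            elif sLine.startswith("_\t"):
                _, sTags, sForm = sLine.split("\t")
                dVerb.setdefault(sLemma, []).append((sTags, sForm))
            else:
                sLemma = sLine.split("\t")[0]       # e.g. "apporter\t1_itn___a"
                dVerb.setdefault(sLemma, [])
    return dVerb

# readConjRecords("gc_lang/fr/data/dictConj.txt")["s’agir"]
#   -> [("infi", "s’agir"), ("ppre", "s’agissant"), …]
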
Modified gc_lang/fr/data/dictDecl.txt from [86b8209873] to [3a7a8fac4c].

_	adj mas sg	autocollant
_	adj mas pl	autocollants
$
autocommutateur	S*()
_	nom mas sg	autocommutateur
_	nom mas pl	autocommutateurs
$




autoconcurrence	S*()
_	nom fem sg	autoconcurrence
_	nom fem pl	autoconcurrences
$
autoconditionnement	S*()
_	nom mas sg	autoconditionnement
_	nom mas pl	autoconditionnements
................................................................................
_	nom mas sg	aviron
_	nom mas pl	avirons
$
avirulence	S*()
_	nom fem sg	avirulence
_	nom fem pl	avirulences
$




aviso	S*()
_	nom mas sg	aviso
_	nom mas pl	avisos
$
avitaillement	S*()
_	nom mas sg	avitaillement
_	nom mas pl	avitaillements
................................................................................
_	nom mas sg	baguier
_	nom mas pl	baguiers
$
baguiste	S.()
_	nom epi sg	baguiste
_	nom epi pl	baguistes
$
baha'ie	F.()
_	nom adj fem sg	baha'ie
_	nom adj fem pl	baha'ies
_	nom adj mas sg	baha'i
_	nom adj mas pl	baha'is
$
bahaïe	F.()
_	nom adj fem sg	bahaïe
_	nom adj fem pl	bahaïes
_	nom adj mas sg	bahaï
_	nom adj mas pl	bahaïs
$
baha'isme	S.()
_	nom mas sg	baha'isme
_	nom mas pl	baha'ismes
$
bahaïsme	S.()
_	nom mas sg	bahaïsme
_	nom mas pl	bahaïsmes
$
bahamienne	F.()
_	nom adj fem sg	bahamienne
_	nom adj fem pl	bahamiennes
_	nom adj mas sg	bahamien
_	nom adj mas pl	bahamiens
$










bahreïnie	F.()
_	nom adj fem sg	bahreïnie
_	nom adj fem pl	bahreïnies
_	nom adj mas sg	bahreïni
_	nom adj mas pl	bahreïnis
$
baht	S.()
................................................................................
_	nom fem sg	bouillasse
_	nom fem pl	bouillasses
$
bouille	S.()
_	nom fem sg	bouille
_	nom fem pl	bouilles
$




bouilleuse	F.()
_	nom fem sg	bouilleuse
_	nom fem pl	bouilleuses
_	nom mas sg	bouilleur
_	nom mas pl	bouilleurs
$
bouillie	S.()
................................................................................
_	nom adj epi sg	chti
_	nom adj epi pl	chtis
$
chtimi	S.()
_	nom adj epi sg	chtimi
_	nom adj epi pl	chtimis
$
ch'timi	S.()
_	nom adj epi sg	ch'timi
_	nom adj epi pl	ch'timis
$
chtonienne	F.()
_	adj fem sg	chtonienne
_	adj fem pl	chtoniennes
_	adj mas sg	chtonien
_	adj mas pl	chtoniens
$
chtouille	S.()
................................................................................
_	adj mas sg	chypré
_	adj mas pl	chyprés
$
chypriote	S.()
_	nom adj epi sg	chypriote
_	nom adj epi pl	chypriotes
$




ciabatta	S.()
_	nom fem sg	ciabatta
_	nom fem pl	ciabattas
$
cibiche	S.()
_	nom fem sg	cibiche
_	nom fem pl	cibiches
................................................................................
_	nom mas sg	dépolissage
_	nom mas pl	dépolissages
$
dépolitisation	S.()
_	nom fem sg	dépolitisation
_	nom fem pl	dépolitisations
$






dépollution	S.()
_	nom fem sg	dépollution
_	nom fem pl	dépollutions
$
dépolymérisation	S.()
_	nom fem sg	dépolymérisation
_	nom fem pl	dépolymérisations
................................................................................
_	nom epi sg	droguiste
_	nom epi pl	droguistes
$
droïde	S.()
_	nom mas sg	droïde
_	nom mas pl	droïdes
$
droit-de-l'hommisme	S.()
_	nom mas sg	droit-de-l'hommisme
_	nom mas pl	droit-de-l'hommismes
$
droit-de-l'hommiste	S.()
_	nom adj epi sg	droit-de-l'hommiste
_	nom adj epi pl	droit-de-l'hommistes
$
droite	F.()
_	nom adj fem sg	droite
_	nom adj fem pl	droites
_	nom adj mas sg	droit
_	nom adj mas pl	droits
$
................................................................................
_	nom fem pl	finances
$
financement	S.()
_	nom mas sg	financement
_	nom mas pl	financements
$
financeuse	F.()
_	nom fem sg	financeuse
_	nom fem pl	financeuses
_	nom mas sg	financeur
_	nom mas pl	financeurs
$
financiarisation	S.()
_	nom fem sg	financiarisation
_	nom fem pl	financiarisations
$
financiarisme	S.()
_	nom mas sg	financiarisme
................................................................................
_	adj epi sg	hyperchrome
_	adj epi pl	hyperchromes
$
hyperchromie	S*()
_	nom fem sg	hyperchromie
_	nom fem pl	hyperchromies
$




hypercomplexe	S*()
_	adj epi sg	hypercomplexe
_	adj epi pl	hypercomplexes
$
hyperconformisme	S*()
_	nom mas sg	hyperconformisme
_	nom mas pl	hyperconformismes
................................................................................
_	adj epi sg	hyperonymique
_	adj epi pl	hyperonymiques
$
hyperostose	S*()
_	nom fem sg	hyperostose
_	nom fem pl	hyperostoses
$




hyperparasite	S*()
_	nom mas sg	hyperparasite
_	nom mas pl	hyperparasites
$
hyperparathyroïdie	S*()
_	nom fem sg	hyperparathyroïdie
_	nom fem pl	hyperparathyroïdies
................................................................................
_	adj epi sg	increvable
_	adj epi pl	increvables
$
incriminable	S*()
_	adj epi sg	incriminable
_	adj epi pl	incriminables
$






incrimination	S*()
_	nom fem sg	incrimination
_	nom fem pl	incriminations
$
incristallisable	S*()
_	adj epi sg	incristallisable
_	adj epi pl	incristallisables
................................................................................
_	nom mas sg	jéjuno-iléon
_	nom mas pl	jéjuno-iléons
$
jéjunum	S.()
_	nom mas sg	jéjunum
_	nom mas pl	jéjunums
$
je-m'en-fichisme	S.()
_	nom mas sg	je-m'en-fichisme
_	nom mas pl	je-m'en-fichismes
$
je-m'en-fichiste	S.()
_	nom adj epi sg	je-m'en-fichiste
_	nom adj epi pl	je-m'en-fichistes
$
je-m'en-foutisme	S.()
_	nom mas sg	je-m'en-foutisme
_	nom mas pl	je-m'en-foutismes
$
je-m'en-foutiste	S.()
_	nom adj epi sg	je-m'en-foutiste
_	nom adj epi pl	je-m'en-foutistes
$
jennérienne	F.()
_	adj fem sg	jennérienne
_	adj fem pl	jennériennes
_	adj mas sg	jennérien
_	adj mas pl	jennériens
$
................................................................................
_	nom mas sg	jusnaturalisme
_	nom mas pl	jusnaturalismes
$
jusnaturaliste	S.()
_	nom adj epi sg	jusnaturaliste
_	nom adj epi pl	jusnaturalistes
$
jusqu'au-boutisme	S.()
_	nom mas sg	jusqu'au-boutisme
_	nom mas pl	jusqu'au-boutismes
$
jusqu'au-boutiste	S.()
_	nom epi sg	jusqu'au-boutiste
_	nom epi pl	jusqu'au-boutistes
$
jusquiame	S.()
_	nom fem sg	jusquiame
_	nom fem pl	jusquiames
$








jussiée	S.()
_	nom fem sg	jussiée
_	nom fem pl	jussiées
$
jussion	S.()
_	nom fem sg	jussion
_	nom fem pl	jussions
................................................................................
_	nom mas sg	mammouth
_	nom mas pl	mammouths
$
mammy	S.()
_	nom fem sg	mammy
_	nom fem pl	mammys
$
mam'selle	S.()
_	nom fem sg	mam'selle
_	nom fem pl	mam'selles
$
mamy	S.()
_	nom fem sg	mamy
_	nom fem pl	mamys
$




mam'zelle	S.()
_	nom fem sg	mam'zelle
_	nom fem pl	mam'zelles
$
man	S.()
_	nom mas sg	man
_	nom mas pl	mans
$
mana	S.()
_	nom mas sg	mana
................................................................................
_	nom mas sg	méaculpa
_	nom mas pl	méaculpas
$
méandre	S.()
_	nom mas sg	méandre
_	nom mas pl	méandres
$





méandriforme	S.()
_	adj epi sg	méandriforme
_	adj epi pl	méandriformes
$
méandrine	S.()
_	nom fem sg	méandrine
_	nom fem pl	méandrines
................................................................................
_	adj mas sg	mesmérien
_	adj mas pl	mesmériens
$
mesmérisme	S.()
_	nom mas sg	mesmérisme
_	nom mas pl	mesmérismes
$






mésoblaste	S.()
_	nom mas sg	mésoblaste
_	nom mas pl	mésoblastes
$
mésoblastique	S.()
_	adj epi sg	mésoblastique
_	adj epi pl	mésoblastiques
................................................................................
_	nom epi sg	neuroanatomiste
_	nom epi pl	neuroanatomistes
$
neuro-anatomiste	S.()
_	nom epi sg	neuro-anatomiste
_	nom epi pl	neuro-anatomistes
$








neurobiochimie	S.()
_	nom fem sg	neurobiochimie
_	nom fem pl	neurobiochimies
$
neurobiochimique	S.()
_	adj epi sg	neurobiochimique
_	adj epi pl	neurobiochimiques
................................................................................
_	nom fem sg	orchidacée
_	nom fem pl	orchidacées
$
orchidée	S*()
_	nom fem sg	orchidée
_	nom fem pl	orchidées
$




orchiépididymite	S*()
_	nom fem sg	orchiépididymite
_	nom fem pl	orchiépididymites
$
orchi-épididymite	S*()
_	nom fem sg	orchi-épididymite
_	nom fem pl	orchi-épididymites
................................................................................
_	nom mas pl	piémonts
$
piémontaise	F.()
_	nom adj fem sg	piémontaise
_	nom adj fem pl	piémontaises
_	nom adj mas inv	piémontais
$




piercing	S.()
_	nom mas sg	piercing
_	nom mas pl	piercings
$
piéride	S.()
_	nom fem sg	piéride
_	nom fem pl	piérides
................................................................................
_	nom fem sg	présidentialisation
_	nom fem pl	présidentialisations
$
présidentialisme	S.()
_	nom mas sg	présidentialisme
_	nom mas pl	présidentialismes
$




présidentielle	F.()
_	adj fem sg	présidentielle
_	adj fem pl	présidentielles
_	adj mas sg	présidentiel
_	adj mas pl	présidentiels
$
présidial	X.()
................................................................................
_	nom adj fem pl	présomptueuses
_	nom adj mas inv	présomptueux
$
présonorisation	S.()
_	nom fem sg	présonorisation
_	nom fem pl	présonorisations
$
presqu'ile	S.()
_	nom fem sg	presqu'ile
_	nom fem pl	presqu'iles
$
presqu'île	S.()
_	nom fem sg	presqu'île
_	nom fem pl	presqu'îles
$
pressabilité	S.()
_	nom fem sg	pressabilité
_	nom fem pl	pressabilités
$
pressage	S.()
_	nom mas sg	pressage
................................................................................
_	adj mas sg	prudentiel
_	adj mas pl	prudentiels
$
pruderie	S.()
_	nom fem sg	pruderie
_	nom fem pl	pruderies
$
prud'homale	W.()
_	adj fem sg	prud'homale
_	adj fem pl	prud'homales
_	adj mas sg	prud'homal
_	adj mas pl	prud'homaux
$
prud'homie	S.()
_	nom fem sg	prud'homie
_	nom fem pl	prud'homies
$
prudhommale	W.()
_	adj fem sg	prudhommale
_	adj fem pl	prudhommales
_	adj mas sg	prudhommal
_	adj mas pl	prudhommaux
$
prudhomme	S.()
_	nom mas sg	prudhomme
_	nom mas pl	prudhommes
$
prud'homme	S.()
_	nom mas sg	prud'homme
_	nom mas pl	prud'hommes
$
prudhommerie	S.()
_	nom fem sg	prudhommerie
_	nom fem pl	prudhommeries
$
prudhommesque	S.()
_	adj epi sg	prudhommesque
_	adj epi pl	prudhommesques
$
prudhommie	S.()
_	nom fem sg	prudhommie
_	nom fem pl	prudhommies
$














pruine	S.()
_	nom fem sg	pruine
_	nom fem pl	pruines
$
prune	S.()
_	nom epi sg	prune
_	nom epi pl	prunes
................................................................................
_	adj mas sg	ptérygoïdien
_	adj mas pl	ptérygoïdiens
$
ptérygote	S.()
_	nom mas sg	ptérygote
_	nom mas pl	ptérygotes
$
p'tite	F.()
_	nom adj fem sg	p'tite
_	nom adj fem pl	p'tites
_	nom adj mas sg	p'tit
_	nom adj mas pl	p'tits
$
ptolémaïque	S.()
_	adj epi sg	ptolémaïque
_	adj epi pl	ptolémaïques
$
ptoléméenne	F.()
_	adj fem sg	ptoléméenne
_	adj fem pl	ptoléméennes
................................................................................
_	nom fem sg	pyélonéphrite
_	nom fem pl	pyélonéphrites
$
pygargue	S.()
_	nom mas sg	pygargue
_	nom mas pl	pygargues
$




pygmée	S.()
_	nom adj epi sg	pygmée
_	nom adj epi pl	pygmées
$
pygméenne	F.()
_	adj fem sg	pygméenne
_	adj fem pl	pygméennes
................................................................................
_	nom fem sg	pyurie
_	nom fem pl	pyuries
$
pyxide	S.()
_	nom fem sg	pyxide
_	nom fem pl	pyxides
$






qanat	S.()
_	nom mas sg	qanat
_	nom mas pl	qanats
$
qat	S.()
_	nom mas sg	qat
_	nom mas pl	qats
................................................................................
_	nom fem sg	ratification
_	nom fem pl	ratifications
$
ratinage	S.()
_	nom mas sg	ratinage
_	nom mas pl	ratinages
$




rating	S.()
_	nom mas sg	rating
_	nom mas pl	ratings
$
ratio	S.()
_	nom mas sg	ratio
_	nom mas pl	ratios
................................................................................
_	nom mas sg	recéleur
_	nom mas pl	recéleurs
$
récence	S.()
_	nom fem sg	récence
_	nom fem pl	récences
$




recensement	S.()
_	nom mas sg	recensement
_	nom mas pl	recensements
$
recenseuse	F.()
_	nom fem sg	recenseuse
_	nom fem pl	recenseuses
................................................................................
$
recycleuse	F.()
_	nom adj fem sg	recycleuse
_	nom adj fem pl	recycleuses
_	nom adj mas sg	recycleur
_	nom adj mas pl	recycleurs
$




rédaction	S.()
_	nom fem sg	rédaction
_	nom fem pl	rédactions
$
rédactionnel	S.()
_	nom mas sg	rédactionnel
_	nom mas pl	rédactionnels
................................................................................
_	adj mas sg	surhumain
_	adj mas pl	surhumains
$
surhumanité	S.()
_	nom fem sg	surhumanité
_	nom fem pl	surhumanités
$




suricate	S.()
_	nom mas sg	suricate
_	nom mas pl	suricates
$
surie	F.()
_	adj fem sg	surie
_	adj fem pl	suries
................................................................................
$
surprenante	F.()
_	adj fem sg	surprenante
_	adj fem pl	surprenantes
_	adj mas sg	surprenant
_	adj mas pl	surprenants
$




surpresseur	S.()
_	nom mas sg	surpresseur
_	nom mas pl	surpresseurs
$
surpression	S.()
_	nom fem sg	surpression
_	nom fem pl	surpressions
................................................................................
_	nom mas sg	triton
_	nom mas pl	tritons
$
triturable	S.()
_	adj epi sg	triturable
_	adj epi pl	triturables
$




triturateur	S.()
_	nom mas sg	triturateur
_	nom mas pl	triturateurs
$
trituration	S.()
_	nom fem sg	trituration
_	nom fem pl	triturations
................................................................................
_	nom mas sg	troll
_	nom mas pl	trolls
$
trolle	S.()
_	nom fem sg	trolle
_	nom fem pl	trolles
$




trolley	S.()
_	nom mas sg	trolley
_	nom mas pl	trolleys
$
trombe	S.()
_	nom fem sg	trombe
_	nom fem pl	trombes
................................................................................
_	nom fem sg	vidéoprojection
_	nom fem pl	vidéoprojections
$
vidéoprotection	S.()
_	nom fem sg	vidéoprotection
_	nom fem pl	vidéoprotections
$




vide-ordure	S.()
_	nom mas sg	vide-ordure
_	nom mas pl	vide-ordures
$
vidéosphère	S.()
_	nom fem sg	vidéosphère
_	nom fem pl	vidéosphères
................................................................................
_	adj epi sg	vierge
_	adj epi pl	vierges
$
vierge	S.()
_	nom fem sg	vierge
_	nom fem pl	vierges
$




vietnamienne	F.()
_	nom adj fem sg	vietnamienne
_	nom adj fem pl	vietnamiennes
_	nom adj mas sg	vietnamien
_	nom adj mas pl	vietnamiens
$
vigésimale	W.()

_	adj mas sg	autocollant
_	adj mas pl	autocollants
$
autocommutateur	S*()
_	nom mas sg	autocommutateur
_	nom mas pl	autocommutateurs
$
autocomplétion	S*()
_	nom fem sg	autocomplétion
_	nom fem pl	autocomplétions
$
autoconcurrence	S*()
_	nom fem sg	autoconcurrence
_	nom fem pl	autoconcurrences
$
autoconditionnement	S*()
_	nom mas sg	autoconditionnement
_	nom mas pl	autoconditionnements
................................................................................
_	nom mas sg	aviron
_	nom mas pl	avirons
$
avirulence	S*()
_	nom fem sg	avirulence
_	nom fem pl	avirulences
$
aviseur	S*()
_	nom mas sg	aviseur
_	nom mas pl	aviseurs
$
aviso	S*()
_	nom mas sg	aviso
_	nom mas pl	avisos
$
avitaillement	S*()
_	nom mas sg	avitaillement
_	nom mas pl	avitaillements
................................................................................
_	nom mas sg	baguier
_	nom mas pl	baguiers
$
baguiste	S.()
_	nom epi sg	baguiste
_	nom epi pl	baguistes
$






bahaïe	F.()
_	nom adj fem sg	bahaïe
_	nom adj fem pl	bahaïes
_	nom adj mas sg	bahaï
_	nom adj mas pl	bahaïs
$




bahaïsme	S.()
_	nom mas sg	bahaïsme
_	nom mas pl	bahaïsmes
$
bahamienne	F.()
_	nom adj fem sg	bahamienne
_	nom adj fem pl	bahamiennes
_	nom adj mas sg	bahamien
_	nom adj mas pl	bahamiens
$
baha’ie	F.()
_	nom adj fem sg	baha’ie
_	nom adj fem pl	baha’ies
_	nom adj mas sg	baha’i
_	nom adj mas pl	baha’is
$
baha’isme	S.()
_	nom mas sg	baha’isme
_	nom mas pl	baha’ismes
$
bahreïnie	F.()
_	nom adj fem sg	bahreïnie
_	nom adj fem pl	bahreïnies
_	nom adj mas sg	bahreïni
_	nom adj mas pl	bahreïnis
$
baht	S.()
................................................................................
_	nom fem sg	bouillasse
_	nom fem pl	bouillasses
$
bouille	S.()
_	nom fem sg	bouille
_	nom fem pl	bouilles
$
bouillette	S.()
_	nom fem sg	bouillette
_	nom fem pl	bouillettes
$
bouilleuse	F.()
_	nom fem sg	bouilleuse
_	nom fem pl	bouilleuses
_	nom mas sg	bouilleur
_	nom mas pl	bouilleurs
$
bouillie	S.()
................................................................................
_	nom adj epi sg	chti
_	nom adj epi pl	chtis
$
chtimi	S.()
_	nom adj epi sg	chtimi
_	nom adj epi pl	chtimis
$




chtonienne	F.()
_	adj fem sg	chtonienne
_	adj fem pl	chtoniennes
_	adj mas sg	chtonien
_	adj mas pl	chtoniens
$
chtouille	S.()
................................................................................
_	adj mas sg	chypré
_	adj mas pl	chyprés
$
chypriote	S.()
_	nom adj epi sg	chypriote
_	nom adj epi pl	chypriotes
$
ch’timi	S.()
_	nom adj epi sg	ch’timi
_	nom adj epi pl	ch’timis
$
ciabatta	S.()
_	nom fem sg	ciabatta
_	nom fem pl	ciabattas
$
cibiche	S.()
_	nom fem sg	cibiche
_	nom fem pl	cibiches
................................................................................
_	nom mas sg	dépolissage
_	nom mas pl	dépolissages
$
dépolitisation	S.()
_	nom fem sg	dépolitisation
_	nom fem pl	dépolitisations
$
dépolluante	F.()
_	adj fem sg	dépolluante
_	adj fem pl	dépolluantes
_	adj mas sg	dépolluant
_	adj mas pl	dépolluants
$
dépollution	S.()
_	nom fem sg	dépollution
_	nom fem pl	dépollutions
$
dépolymérisation	S.()
_	nom fem sg	dépolymérisation
_	nom fem pl	dépolymérisations
................................................................................
_	nom epi sg	droguiste
_	nom epi pl	droguistes
$
droïde	S.()
_	nom mas sg	droïde
_	nom mas pl	droïdes
$
droit-de-lhommisme	S.()
_	nom mas sg	droit-de-lhommisme
_	nom mas pl	droit-de-lhommismes
$
droit-de-lhommiste	S.()
_	nom adj epi sg	droit-de-lhommiste
_	nom adj epi pl	droit-de-lhommistes
$
droite	F.()
_	nom adj fem sg	droite
_	nom adj fem pl	droites
_	nom adj mas sg	droit
_	nom adj mas pl	droits
$
................................................................................
_	nom fem pl	finances
$
financement	S.()
_	nom mas sg	financement
_	nom mas pl	financements
$
financeuse	F.()
_	nom adj fem sg	financeuse
_	nom adj fem pl	financeuses
_	nom adj mas sg	financeur
_	nom adj mas pl	financeurs
$
financiarisation	S.()
_	nom fem sg	financiarisation
_	nom fem pl	financiarisations
$
financiarisme	S.()
_	nom mas sg	financiarisme
................................................................................
_	adj epi sg	hyperchrome
_	adj epi pl	hyperchromes
$
hyperchromie	S*()
_	nom fem sg	hyperchromie
_	nom fem pl	hyperchromies
$
hypercoagulabilité	S*()
_	nom fem sg	hypercoagulabilité
_	nom fem pl	hypercoagulabilités
$
hypercomplexe	S*()
_	adj epi sg	hypercomplexe
_	adj epi pl	hypercomplexes
$
hyperconformisme	S*()
_	nom mas sg	hyperconformisme
_	nom mas pl	hyperconformismes
................................................................................
_	adj epi sg	hyperonymique
_	adj epi pl	hyperonymiques
$
hyperostose	S*()
_	nom fem sg	hyperostose
_	nom fem pl	hyperostoses
$
hyperoxie	S*()
_	nom fem sg	hyperoxie
_	nom fem pl	hyperoxies
$
hyperparasite	S*()
_	nom mas sg	hyperparasite
_	nom mas pl	hyperparasites
$
hyperparathyroïdie	S*()
_	nom fem sg	hyperparathyroïdie
_	nom fem pl	hyperparathyroïdies
................................................................................
_	adj epi sg	increvable
_	adj epi pl	increvables
$
incriminable	S*()
_	adj epi sg	incriminable
_	adj epi pl	incriminables
$
incriminante	F*()
_	adj fem sg	incriminante
_	adj fem pl	incriminantes
_	adj mas sg	incriminant
_	adj mas pl	incriminants
$
incrimination	S*()
_	nom fem sg	incrimination
_	nom fem pl	incriminations
$
incristallisable	S*()
_	adj epi sg	incristallisable
_	adj epi pl	incristallisables
................................................................................
_	nom mas sg	jéjuno-iléon
_	nom mas pl	jéjuno-iléons
$
jéjunum	S.()
_	nom mas sg	jéjunum
_	nom mas pl	jéjunums
$
je-men-fichisme	S.()
_	nom mas sg	je-men-fichisme
_	nom mas pl	je-men-fichismes
$
je-men-fichiste	S.()
_	nom adj epi sg	je-men-fichiste
_	nom adj epi pl	je-men-fichistes
$
je-men-foutisme	S.()
_	nom mas sg	je-men-foutisme
_	nom mas pl	je-men-foutismes
$
je-men-foutiste	S.()
_	nom adj epi sg	je-men-foutiste
_	nom adj epi pl	je-men-foutistes
$
jennérienne	F.()
_	adj fem sg	jennérienne
_	adj fem pl	jennériennes
_	adj mas sg	jennérien
_	adj mas pl	jennériens
$
................................................................................
_	nom mas sg	jusnaturalisme
_	nom mas pl	jusnaturalismes
$
jusnaturaliste	S.()
_	nom adj epi sg	jusnaturaliste
_	nom adj epi pl	jusnaturalistes
$








jusquiame	S.()
_	nom fem sg	jusquiame
_	nom fem pl	jusquiames
$
jusqu’au-boutisme	S.()
_	nom mas sg	jusqu’au-boutisme
_	nom mas pl	jusqu’au-boutismes
$
jusqu’au-boutiste	S.()
_	nom epi sg	jusqu’au-boutiste
_	nom epi pl	jusqu’au-boutistes
$
jussiée	S.()
_	nom fem sg	jussiée
_	nom fem pl	jussiées
$
jussion	S.()
_	nom fem sg	jussion
_	nom fem pl	jussions
................................................................................
_	nom mas sg	mammouth
_	nom mas pl	mammouths
$
mammy	S.()
_	nom fem sg	mammy
_	nom fem pl	mammys
$




mamy	S.()
_	nom fem sg	mamy
_	nom fem pl	mamys
$
mam’selle	S.()
_	nom fem sg	mam’selle
_	nom fem pl	mam’selles
$
mamzelle	S.()
_	nom fem sg	mamzelle
_	nom fem pl	mamzelles
$
man	S.()
_	nom mas sg	man
_	nom mas pl	mans
$
mana	S.()
_	nom mas sg	mana
................................................................................
_	nom mas sg	méaculpa
_	nom mas pl	méaculpas
$
méandre	S.()
_	nom mas sg	méandre
_	nom mas pl	méandres
$
méandreuse	W.()
_	adj fem sg	méandreuse
_	adj fem pl	méandreuses
_	adj mas inv	méandreux
$
méandriforme	S.()
_	adj epi sg	méandriforme
_	adj epi pl	méandriformes
$
méandrine	S.()
_	nom fem sg	méandrine
_	nom fem pl	méandrines
................................................................................
_	adj mas sg	mesmérien
_	adj mas pl	mesmériens
$
mesmérisme	S.()
_	nom mas sg	mesmérisme
_	nom mas pl	mesmérismes
$
mésoaméricaine	F.()
_	nom adj fem sg	mésoaméricaine
_	nom adj fem pl	mésoaméricaines
_	nom adj mas sg	mésoaméricain
_	nom adj mas pl	mésoaméricains
$
mésoblaste	S.()
_	nom mas sg	mésoblaste
_	nom mas pl	mésoblastes
$
mésoblastique	S.()
_	adj epi sg	mésoblastique
_	adj epi pl	mésoblastiques
................................................................................
_	nom epi sg	neuroanatomiste
_	nom epi pl	neuroanatomistes
$
neuro-anatomiste	S.()
_	nom epi sg	neuro-anatomiste
_	nom epi pl	neuro-anatomistes
$
neuroatypique	S.()
_	adj epi sg	neuroatypique
_	adj epi pl	neuroatypiques
$
neuro-atypique	S.()
_	adj epi sg	neuro-atypique
_	adj epi pl	neuro-atypiques
$
neurobiochimie	S.()
_	nom fem sg	neurobiochimie
_	nom fem pl	neurobiochimies
$
neurobiochimique	S.()
_	adj epi sg	neurobiochimique
_	adj epi pl	neurobiochimiques
................................................................................
_	nom fem sg	orchidacée
_	nom fem pl	orchidacées
$
orchidée	S*()
_	nom fem sg	orchidée
_	nom fem pl	orchidées
$
orchidologie	S*()
_	nom fem sg	orchidologie
_	nom fem pl	orchidologies
$
orchiépididymite	S*()
_	nom fem sg	orchiépididymite
_	nom fem pl	orchiépididymites
$
orchi-épididymite	S*()
_	nom fem sg	orchi-épididymite
_	nom fem pl	orchi-épididymites
................................................................................
_	nom mas pl	piémonts
$
piémontaise	F.()
_	nom adj fem sg	piémontaise
_	nom adj fem pl	piémontaises
_	nom adj mas inv	piémontais
$
pier	S.()
_	nom mas sg	pier
_	nom mas pl	piers
$
piercing	S.()
_	nom mas sg	piercing
_	nom mas pl	piercings
$
piéride	S.()
_	nom fem sg	piéride
_	nom fem pl	piérides
................................................................................
_	nom fem sg	présidentialisation
_	nom fem pl	présidentialisations
$
présidentialisme	S.()
_	nom mas sg	présidentialisme
_	nom mas pl	présidentialismes
$
présidentialiste	S.()
_	nom adj epi sg	présidentialiste
_	nom adj epi pl	présidentialistes
$
présidentielle	F.()
_	adj fem sg	présidentielle
_	adj fem pl	présidentielles
_	adj mas sg	présidentiel
_	adj mas pl	présidentiels
$
présidial	X.()
................................................................................
_	nom adj fem pl	présomptueuses
_	nom adj mas inv	présomptueux
$
présonorisation	S.()
_	nom fem sg	présonorisation
_	nom fem pl	présonorisations
$
presquile	S.()
_	nom fem sg	presquile
_	nom fem pl	presquiles
$
presquîle	S.()
_	nom fem sg	presquîle
_	nom fem pl	presquîles
$
pressabilité	S.()
_	nom fem sg	pressabilité
_	nom fem pl	pressabilités
$
pressage	S.()
_	nom mas sg	pressage
................................................................................
_	adj mas sg	prudentiel
_	adj mas pl	prudentiels
$
pruderie	S.()
_	nom fem sg	pruderie
_	nom fem pl	pruderies
$










prudhommale	W.()
_	adj fem sg	prudhommale
_	adj fem pl	prudhommales
_	adj mas sg	prudhommal
_	adj mas pl	prudhommaux
$
prudhomme	S.()
_	nom mas sg	prudhomme
_	nom mas pl	prudhommes
$




prudhommerie	S.()
_	nom fem sg	prudhommerie
_	nom fem pl	prudhommeries
$
prudhommesque	S.()
_	adj epi sg	prudhommesque
_	adj epi pl	prudhommesques
$
prudhommie	S.()
_	nom fem sg	prudhommie
_	nom fem pl	prudhommies
$
prud’homale	W.()
_	adj fem sg	prud’homale
_	adj fem pl	prud’homales
_	adj mas sg	prud’homal
_	adj mas pl	prud’homaux
$
prud’homie	S.()
_	nom fem sg	prud’homie
_	nom fem pl	prud’homies
$
prud’homme	S.()
_	nom mas sg	prud’homme
_	nom mas pl	prud’hommes
$
pruine	S.()
_	nom fem sg	pruine
_	nom fem pl	pruines
$
prune	S.()
_	nom epi sg	prune
_	nom epi pl	prunes
................................................................................
_	adj mas sg	ptérygoïdien
_	adj mas pl	ptérygoïdiens
$
ptérygote	S.()
_	nom mas sg	ptérygote
_	nom mas pl	ptérygotes
$






ptolémaïque	S.()
_	adj epi sg	ptolémaïque
_	adj epi pl	ptolémaïques
$
ptoléméenne	F.()
_	adj fem sg	ptoléméenne
_	adj fem pl	ptoléméennes
................................................................................
_	nom fem sg	pyélonéphrite
_	nom fem pl	pyélonéphrites
$
pygargue	S.()
_	nom mas sg	pygargue
_	nom mas pl	pygargues
$
pygmalionisme	S.()
_	nom mas sg	pygmalionisme
_	nom mas pl	pygmalionismes
$
pygmée	S.()
_	nom adj epi sg	pygmée
_	nom adj epi pl	pygmées
$
pygméenne	F.()
_	adj fem sg	pygméenne
_	adj fem pl	pygméennes
................................................................................
_	nom fem sg	pyurie
_	nom fem pl	pyuries
$
pyxide	S.()
_	nom fem sg	pyxide
_	nom fem pl	pyxides
$
p’tite	F.()
_	nom adj fem sg	p’tite
_	nom adj fem pl	p’tites
_	nom adj mas sg	p’tit
_	nom adj mas pl	p’tits
$
qanat	S.()
_	nom mas sg	qanat
_	nom mas pl	qanats
$
qat	S.()
_	nom mas sg	qat
_	nom mas pl	qats
................................................................................
_	nom fem sg	ratification
_	nom fem pl	ratifications
$
ratinage	S.()
_	nom mas sg	ratinage
_	nom mas pl	ratinages
$
ratine	S.()
_	nom fem sg	ratine
_	nom fem pl	ratines
$
rating	S.()
_	nom mas sg	rating
_	nom mas pl	ratings
$
ratio	S.()
_	nom mas sg	ratio
_	nom mas pl	ratios
................................................................................
_	nom mas sg	recéleur
_	nom mas pl	recéleurs
$
récence	S.()
_	nom fem sg	récence
_	nom fem pl	récences
$
recensable	S.()
_	adj epi sg	recensable
_	adj epi pl	recensables
$
recensement	S.()
_	nom mas sg	recensement
_	nom mas pl	recensements
$
recenseuse	F.()
_	nom fem sg	recenseuse
_	nom fem pl	recenseuses
................................................................................
$
recycleuse	F.()
_	nom adj fem sg	recycleuse
_	nom adj fem pl	recycleuses
_	nom adj mas sg	recycleur
_	nom adj mas pl	recycleurs
$
rédac-chef	S.()
_	nom epi sg	rédac-chef
_	nom epi pl	rédac-chefs
$
rédaction	S.()
_	nom fem sg	rédaction
_	nom fem pl	rédactions
$
rédactionnel	S.()
_	nom mas sg	rédactionnel
_	nom mas pl	rédactionnels
................................................................................
_	adj mas sg	surhumain
_	adj mas pl	surhumains
$
surhumanité	S.()
_	nom fem sg	surhumanité
_	nom fem pl	surhumanités
$
surhydratation	S.()
_	nom fem sg	surhydratation
_	nom fem pl	surhydratations
$
suricate	S.()
_	nom mas sg	suricate
_	nom mas pl	suricates
$
surie	F.()
_	adj fem sg	surie
_	adj fem pl	suries
................................................................................
$
surprenante	F.()
_	adj fem sg	surprenante
_	adj fem pl	surprenantes
_	adj mas sg	surprenant
_	adj mas pl	surprenants
$
surprescription	S.()
_	nom fem sg	surprescription
_	nom fem pl	surprescriptions
$
surpresseur	S.()
_	nom mas sg	surpresseur
_	nom mas pl	surpresseurs
$
surpression	S.()
_	nom fem sg	surpression
_	nom fem pl	surpressions
................................................................................
_	nom mas sg	triton
_	nom mas pl	tritons
$
triturable	S.()
_	adj epi sg	triturable
_	adj epi pl	triturables
$
triturage	S.()
_	nom mas sg	triturage
_	nom mas pl	triturages
$
triturateur	S.()
_	nom mas sg	triturateur
_	nom mas pl	triturateurs
$
trituration	S.()
_	nom fem sg	trituration
_	nom fem pl	triturations
................................................................................
_	nom mas sg	troll
_	nom mas pl	trolls
$
trolle	S.()
_	nom fem sg	trolle
_	nom fem pl	trolles
$
trollesque	S.()
_	adj epi sg	trollesque
_	adj epi pl	trollesques
$
trolley	S.()
_	nom mas sg	trolley
_	nom mas pl	trolleys
$
trombe	S.()
_	nom fem sg	trombe
_	nom fem pl	trombes
................................................................................
_	nom fem sg	vidéoprojection
_	nom fem pl	vidéoprojections
$
vidéoprotection	S.()
_	nom fem sg	vidéoprotection
_	nom fem pl	vidéoprotections
$
vidéo-protection	S.()
_	nom fem sg	vidéo-protection
_	nom fem pl	vidéo-protections
$
vide-ordure	S.()
_	nom mas sg	vide-ordure
_	nom mas pl	vide-ordures
$
vidéosphère	S.()
_	nom fem sg	vidéosphère
_	nom fem pl	vidéosphères
................................................................................
_	adj epi sg	vierge
_	adj epi pl	vierges
$
vierge	S.()
_	nom fem sg	vierge
_	nom fem pl	vierges
$
viétique	S.()
_	adj epi sg	viétique
_	adj epi pl	viétiques
$
vietnamienne	F.()
_	nom adj fem sg	vietnamienne
_	nom adj fem pl	vietnamiennes
_	nom adj mas sg	vietnamien
_	nom adj mas pl	vietnamiens
$
vigésimale	W.()
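
The lexicon excerpts above all follow the same block layout: a header line with the lemma, a tab and an affix flag such as S.(), F*() or W.(), then one tab-separated flexion line per inflected form introduced by "_", and a closing "$". The sketch below only illustrates that visible layout; parse_entries and the field names are illustrative helpers, not part of Grammalecte's build code.

def parse_entries (hSrc):
    "Yield one dict per lemma block of the tab-separated layout shown above (illustrative)."
    dEntry = None
    for sLine in hSrc:
        sLine = sLine.rstrip("\n")
        if not sLine or sLine.startswith("."):
            # skip blank lines and the "..." hunk separators of the diff view
            continue
        if sLine == "$":
            # "$" closes the current block
            if dEntry:
                yield dEntry
            dEntry = None
        elif sLine.startswith("_\t"):
            # flexion line: "_" <tab> tags <tab> inflected form
            if dEntry:
                _, sTags, sFlex = sLine.split("\t")
                dEntry["lFlexions"].append((sFlex, sTags))
        else:
            # header line: lemma <tab> affix flag
            sLemma, _, sFlags = sLine.partition("\t")
            dEntry = { "sLemma": sLemma, "sFlags": sFlags, "lFlexions": [] }

# hypothetical usage:
#   for dEntry in parse_entries(open("lexicon.txt", encoding="utf-8")):
#       print(dEntry["sLemma"], dEntry["sFlags"], len(dEntry["lFlexions"]))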

Modified gc_lang/fr/dictionnaire/genfrdic.py from [4630ffd3ad] to [1dab8fa65c].

                hDst.write("  > {0[1]:>8} : {0[0]}\n".format(elem))

    def writeDictionary (self, spDst, dTplVars, nMode, bSimplified):
        "Écrire le fichier dictionnaire (.dic)"
        echo(' * Dictionnaire >> [ {}.dic ] ({})'.format(dTplVars['asciiName'], dTplVars['subDicts']))
        nEntry = 0
        for oEntry in self.lEntry:
            if oEntry.di in dTplVars['subDicts']:
                nEntry += 1
        with open(spDst+'/'+dTplVars['asciiName']+'.dic', 'w', encoding='utf-8', newline="\n") as hDst:
            hDst.write(str(nEntry)+"\n")
            for oEntry in self.lEntry:
                if oEntry.di in dTplVars['subDicts']:
                    hDst.write(oEntry.getEntryLine(self, nMode, bSimplified))

    def writeAffixes (self, spDst, dTplVars, nMode, bSimplified):
        "Écrire le fichier des affixes (.aff)"
        echo(' * Dictionnaire >> [ {}.aff ]'.format(dTplVars['asciiName']))
        info = "# This Source Code Form is subject to the terms of the Mozilla Public\n" + \
               "# License, v. 2.0. If a copy of the MPL was not distributed with this\n" + \
               "# file, You can obtain one at http://mozilla.org/MPL/2.0/.\n\n" + \
................................................................................
        self.iD = '0'

        # autres
        self.comment = ''
        self.err = ''
        self.nFlexions = 0
        self.lFlexions = []
        self.sRadical = ''
        self.nOccur = 0
        self.nAKO = -1   # Average known occurrences
        self.fFreq = 0
        self.oldFq = ''

        sLine = sLine.rstrip(" \n")
        # commentaire
        if '#' in sLine:
            sLine, comment = sLine.split('#', 1)
            self.comment = comment.strip()
        # éléments de la ligne
        elems = sLine.split()
        nElems = len(elems)
        # lemme et drapeaux
        firstElems = elems[0].split('/')
        self.lemma = firstElems[0]
        self.flags = firstElems[1]  if len(firstElems) > 1  else ''
        # morph
        for i in range(1, nElems):
................................................................................
        if re.search(r"\s$", self.lemma):
            sErr += 'espace en fin de lemme'
        if re.match(r"v[0123]", self.po) and not re.match(r"[eas_][ix_][tx_][nx_][pqreuvx_][mx_][ex_z][ax_z]\b", self.po[2:]):
            sErr += 'verbe inconnu: ' + self.po
        if (re.match(r"S[*.]", self.flags) and re.search("[sxz]$", self.lemma)) or (re.match(r"X[*.]", self.flags) and not re.search("[ul]$", self.lemma)):
            sErr += 'drapeau inutile'
        if self.iz == '' and re.match(r"[SXAI](?!=)", self.flags) and self.po:
            sErr += '[is]'
        if re.match(r"pl|sg|inv", self.iz):
            sErr += '[is]'
        if re.match(r"[FW]", self.flags) and re.search(r"epi|mas|fem|inv|sg|pl", self.iz):
            sErr += '[is]'
        if re.match(r"[FW]", self.flags) and re.search(r"[^eë]$", self.lemma):
            sErr += "fin de lemme inapproprié"
        if re.match(r".\*", self.flags) and re.match(r"[bcdfgjklmnpqrstvwxz]", self.lemma):
            sErr += 'drapeau pour lemme commençant par une voyelle'
        if re.search(r"pl|sg|inv", self.iz) and re.match(r"[SXAIFW](?!=)", self.flags):
            sErr += '[is]'
        if re.search(r"nom|adj", self.po) and re.match(r"(?i)[aâàäáeéèêëiîïíìoôöóòuûüúù]", self.lemma) and re.match("[SFWXAI][.]", self.flags) \
           and "pel" not in self.lx:
            sErr += 'le drapeau devrait finir avec *'
        if not self.flags and self.iz.endswith(("mas", "fem", "epi")):
            sErr += '[is] incomplet'
        if self.flags.startswith(("a", "b", "c", "d")) and not self.lemma.endswith("er"):
            sErr += "drapeau pour verbe du 1ᵉʳ groupe sur un lemme non conforme"
................................................................................

    def keyTriNat (self):
        return (self.lemma.translate(CHARMAP), self.flags, self.po)

    def keyTriNum (self):
        return (self.lemma, self.flags, self.po)

    def getEntryLine (self, oDict, nMode, bSimplified=False):
        sLine = self.lemma
        if self.flags:
            sLine += '/'
            sLine += self.flags  if not oDict.bShortenTags or bSimplified  else oDict.dAF[self.flags]
        if bSimplified:
            return sLine.replace("()", "") + "\n"
        if nMode > 0:
            sMorph = self.getMorph(nMode)
................................................................................
                if not sMorph.endswith((" mas", " fem", " epi")):
                    self.lFlexions.append( Flexion(self, sFlex, sMorph, sDic) )
                    self.nFlexions += 1
                else:
                    #echo(sFlex + " " + sMorph + ", ")
                    pass
        # Drapeaux dont le lemme féminin doit être remplacé par le masculin dans la gestion des formes fléchies



        if self.flags.startswith(("F.", "F*", "W.", "W*")):
            # recherche de la forme masculine
            for t in lTuples:
                sMorph = self.clean(t[1])
                if sMorph.endswith('mas') or sMorph.endswith('mas sg') or sMorph.endswith('mas inv'):
                    self.sRadical = t[0]


        else:
            self.sRadical = self.lemma
        # Tag duplicates
        d = {}
        for oFlex in self.lFlexions:
            d[oFlex.sFlexion] = d.get(oFlex.sFlexion, 0) + 1
        for oFlex in self.lFlexions:
            oFlex.nDup = d[oFlex.sFlexion]

................................................................................
            sOccurs += t[1] + "\t"
        return "id\tFlexion\tLemme\tÉtiquettes\tMétagraphe (β)\tMetaphone2\tNotes\tSémantique\tÉtymologie\tSous-dictionnaire\t" + sOccurs + "Total occurrences\tDoublons\tMultiples\tFréquence\tIndice de fréquence\n"

    def __str__ (self, oStatsLex):
        sOccurs = ''
        for v in oStatsLex.dFlexions[self.sFlexion]:
            sOccurs += str(v) + "\t"
        return "{0.oEntry.iD}\t{0.sFlexion}\t{0.oEntry.sRadical}\t{0.sMorph}\t{0.metagfx}\t{0.metaph2}\t{0.oEntry.lx}\t{0.oEntry.se}\t{0.oEntry.et}\t{0.oEntry.di}{2}\t{1}{0.nOccur}\t{0.nDup}\t{0.nMulti}\t{0.fFreq:.15f}\t{0.cFq}\n".format(self, sOccurs, "/"+self.cDic if self.cDic != "*" else "")

    @classmethod
    def simpleHeader (cls):
        return "# :POS ;LEX ~SEM =FQ /DIC\n"

    def getGrammarCheckerRepr (self):
        return "{0.sFlexion}\t{0.oEntry.lemma}\t{1}\n".format(self, self._getSimpleTags())
................................................................................
        "ipre": ":Ip", "iimp": ":Iq", "ipsi": ":Is", "ifut": ":If",
        "spre": ":Sp", "simp": ":Sq", "cond": ":K", "impe": ":E",
        "1sg": ":1s", "1isg": ":1ś", "1jsg": ":1ŝ", "2sg": ":2s", "3sg": ":3s", "1pl": ":1p", "2pl": ":2p", "3pl": ":3p", "3pl!": ":3p!",
        "prepv": ":Rv", "prep": ":R", "loc.prep": ":Ŕ",
        "detpos": ":Dp", "detdem": ":Dd", "detind": ":Di", "detneg": ":Dn", "detex": ":De", "det": ":D",
        "advint": ":U",
        "prodem": ":Od", "proind": ":Oi", "proint": ":Ot", "proneg": ":On", "prorel": ":Or", "proadv": ":Ow",
        "properobj": ":Oo", "propersuj": ":Os", "1pe": ":O1", "2pe": ":O2", "3pe": ":O3",
        "cjco": ":Cc", "cjsub": ":Cs", "cj": ":C", "loc.cj": ":Ĉ", "loc.cjsub": ":Ĉs",
        "prn": ":M1", "patr": ":M2", "loc.patr": ":Ḿ2", "npr": ":MP", "nompr": ":NM",
        "pfx": ":Zp", "sfx": ":Zs",
        "div": ":H",
        "err": ":#",
        # LEX
        "symb": ";S"
................................................................................
            s += "/" + self.oEntry.di
        return s

    def keyTriNat (self):
        return (self.sFlexion.translate(CHARMAP), self.sMorph)

    def keyFreq (self):
        return (100-self.fFreq, self.oEntry.sRadical, self.sFlexion)

    def keyOcc (self):
        return (self.nOccur, self.oEntry.sRadical, self.sFlexion)

    def keyIdx (self):
        return self.oEntry.iD

    def keyFlexion (self):
        return self.sFlexion

................................................................................
                hDst.write(str(t)+"\n")
            for e in self.dFlexions.items():
                hDst.write("{} - {}\n".format(e[0], e[1]))



def main ():

    xParser = argparse.ArgumentParser()
    xParser.add_argument("-v", "--verdic", help="set dictionary version, i.e. 5.4", type=str, default="X.Y.z")
    xParser.add_argument("-m", "--mode", help="0: no tags,  1: Hunspell tags (default),  2: All tags", type=int, choices=[0, 1, 2], default=1)
    xParser.add_argument("-u", "--uncompress", help="do not use Hunspell compression", action="store_true")
    xParser.add_argument("-s", "--simplify", help="no virtual lemmas", action="store_true")
    xParser.add_argument("-sv", "--spellvariants", help="generate spell variants", action="store_true")
    xParser.add_argument("-gl", "--grammalecte", help="copy generated files to Grammalecte folders", action="store_true")
................................................................................
    oFrenchDict.calculateStats(oStatsLex, spfStats)

    ### écriture des paquets
    echo("Création des paquets...")

    spLexiconDestGL = "../../../lexicons"  if xArgs.grammalecte  else ""
    spLibreOfficeExtDestGL = "../oxt/Dictionnaires/dictionaries"  if xArgs.grammalecte  else ""
    spMozillaExtDestGL = ""  # les dictionnaires pour Hunspell ne sont plus utilisés pour l’instant dans Firefox / Thunderbird
    spDataDestGL = "../data"  if xArgs.grammalecte  else ""

    if not xArgs.uncompress:
        oFrenchDict.defineAbreviatedTags(xArgs.mode, spfStats)
    oFrenchDict.createFiles(spBuild, [dMODERNE, dTOUTESVAR, dCLASSIQUE, dREFORME1990], xArgs.mode, xArgs.simplify)
    oFrenchDict.createLexiconPackages(spBuild, xArgs.verdic, oStatsLex, spLexiconDestGL)
    oFrenchDict.createFileIfqForDB(spBuild)


                hDst.write("  > {0[1]:>8} : {0[0]}\n".format(elem))

    def writeDictionary (self, spDst, dTplVars, nMode, bSimplified):
        "Écrire le fichier dictionnaire (.dic)"
        echo(' * Dictionnaire >> [ {}.dic ] ({})'.format(dTplVars['asciiName'], dTplVars['subDicts']))
        nEntry = 0
        for oEntry in self.lEntry:
            if oEntry.di in dTplVars['subDicts'] and " " not in oEntry.lemma:
                nEntry += 1
        with open(spDst+'/'+dTplVars['asciiName']+'.dic', 'w', encoding='utf-8', newline="\n") as hDst:
            hDst.write(str(nEntry)+"\n")
            for oEntry in self.lEntry:
                if oEntry.di in dTplVars['subDicts'] and " " not in oEntry.lemma:
                    hDst.write(oEntry.getHunspellLine(self, nMode, bSimplified))

    def writeAffixes (self, spDst, dTplVars, nMode, bSimplified):
        "Écrire le fichier des affixes (.aff)"
        echo(' * Dictionnaire >> [ {}.aff ]'.format(dTplVars['asciiName']))
        info = "# This Source Code Form is subject to the terms of the Mozilla Public\n" + \
               "# License, v. 2.0. If a copy of the MPL was not distributed with this\n" + \
               "# file, You can obtain one at http://mozilla.org/MPL/2.0/.\n\n" + \
................................................................................
        self.iD = '0'

        # autres
        self.comment = ''
        self.err = ''
        self.nFlexions = 0
        self.lFlexions = []
        self.sStem = ''
        self.nOccur = 0
        self.nAKO = -1   # Average known occurrences
        self.fFreq = 0
        self.oldFq = ''

        sLine = sLine.rstrip(" \n")
        # commentaire
        if '#' in sLine:
            sLine, comment = sLine.split('#', 1)
            self.comment = comment.strip()
        # éléments de la ligne
        elems = sLine.split("\t")
        nElems = len(elems)
        # lemme et drapeaux
        firstElems = elems[0].split('/')
        self.lemma = firstElems[0]
        self.flags = firstElems[1]  if len(firstElems) > 1  else ''
        # morph
        for i in range(1, nElems):
................................................................................
        if re.search(r"\s$", self.lemma):
            sErr += 'espace en fin de lemme'
        if re.match(r"v[0123]", self.po) and not re.match(r"[eas_][ix_][tx_][nx_][pqreuvx_][mx_][ex_z][ax_z]\b", self.po[2:]):
            sErr += 'verbe inconnu: ' + self.po
        if (re.match(r"S[*.]", self.flags) and re.search("[sxz]$", self.lemma)) or (re.match(r"X[*.]", self.flags) and not re.search("[ul]$", self.lemma)):
            sErr += 'drapeau inutile'
        if self.iz == '' and re.match(r"[SXAI](?!=)", self.flags) and self.po:
            sErr += '[is] vide'
        if re.match(r"pl|sg|inv", self.iz):
            sErr += '[is] incomplet'
        if re.match(r"[FW]", self.flags) and re.search(r"epi|mas|fem|inv|sg|pl", self.iz):
            sErr += '[is] incohérent'
        if re.match(r"[FW]", self.flags) and re.search(r"[^eë]$", self.lemma):
            sErr += "fin de lemme inapproprié"
        if re.match(r".\*", self.flags) and re.match(r"[bcdfgjklmnpqrstvwxz]", self.lemma):
            sErr += 'drapeau pour lemme commençant par une voyelle'
        if re.search(r"pl|sg|inv", self.iz) and re.match(r"[SXAIFW](?!=)", self.flags):
            sErr += '[is] incohérent'
        if re.search(r"nom|adj", self.po) and re.match(r"(?i)[aâàäáeéèêëiîïíìoôöóòuûüúù]", self.lemma) and re.match("[SFWXAI][.]", self.flags) \
           and "pel" not in self.lx:
            sErr += 'le drapeau devrait finir avec *'
        if not self.flags and self.iz.endswith(("mas", "fem", "epi")):
            sErr += '[is] incomplet'
        if self.flags.startswith(("a", "b", "c", "d")) and not self.lemma.endswith("er"):
            sErr += "drapeau pour verbe du 1ᵉʳ groupe sur un lemme non conforme"
................................................................................

    def keyTriNat (self):
        return (self.lemma.translate(CHARMAP), self.flags, self.po)

    def keyTriNum (self):
        return (self.lemma, self.flags, self.po)

    def getHunspellLine (self, oDict, nMode, bSimplified=False):
        sLine = self.lemma.replace("’", "'")
        if self.flags:
            sLine += '/'
            sLine += self.flags  if not oDict.bShortenTags or bSimplified  else oDict.dAF[self.flags]
        if bSimplified:
            return sLine.replace("()", "") + "\n"
        if nMode > 0:
            sMorph = self.getMorph(nMode)
................................................................................
                if not sMorph.endswith((" mas", " fem", " epi")):
                    self.lFlexions.append( Flexion(self, sFlex, sMorph, sDic) )
                    self.nFlexions += 1
                else:
                    #echo(sFlex + " " + sMorph + ", ")
                    pass
        # Drapeaux dont le lemme féminin doit être remplacé par le masculin dans la gestion des formes fléchies
        if self.st:
            self.sStem = self.st
        else:
            if self.flags.startswith(("F.", "F*", "W.", "W*")):
                # recherche de la forme masculine
                for t in lTuples:
                    sMorph = self.clean(t[1])


                    if sMorph.endswith(('mas', 'mas sg', 'mas inv')):
                        self.sStem = t[0]
            else:
                self.sStem = self.lemma
        # Tag duplicates
        d = {}
        for oFlex in self.lFlexions:
            d[oFlex.sFlexion] = d.get(oFlex.sFlexion, 0) + 1
        for oFlex in self.lFlexions:
            oFlex.nDup = d[oFlex.sFlexion]

................................................................................
            sOccurs += t[1] + "\t"
        return "id\tFlexion\tLemme\tÉtiquettes\tMétagraphe (β)\tMetaphone2\tNotes\tSémantique\tÉtymologie\tSous-dictionnaire\t" + sOccurs + "Total occurrences\tDoublons\tMultiples\tFréquence\tIndice de fréquence\n"

    def __str__ (self, oStatsLex):
        sOccurs = ''
        for v in oStatsLex.dFlexions[self.sFlexion]:
            sOccurs += str(v) + "\t"
        return "{0.oEntry.iD}\t{0.sFlexion}\t{0.oEntry.sStem}\t{0.sMorph}\t{0.metagfx}\t{0.metaph2}\t{0.oEntry.lx}\t{0.oEntry.se}\t{0.oEntry.et}\t{0.oEntry.di}{2}\t{1}{0.nOccur}\t{0.nDup}\t{0.nMulti}\t{0.fFreq:.15f}\t{0.cFq}\n".format(self, sOccurs, "/"+self.cDic if self.cDic != "*" else "")

    @classmethod
    def simpleHeader (cls):
        return "# :POS ;LEX ~SEM =FQ /DIC\n"

    def getGrammarCheckerRepr (self):
        return "{0.sFlexion}\t{0.oEntry.lemma}\t{1}\n".format(self, self._getSimpleTags())
................................................................................
        "ipre": ":Ip", "iimp": ":Iq", "ipsi": ":Is", "ifut": ":If",
        "spre": ":Sp", "simp": ":Sq", "cond": ":K", "impe": ":E",
        "1sg": ":1s", "1isg": ":1ś", "1jsg": ":1ŝ", "2sg": ":2s", "3sg": ":3s", "1pl": ":1p", "2pl": ":2p", "3pl": ":3p", "3pl!": ":3p!",
        "prepv": ":Rv", "prep": ":R", "loc.prep": ":Ŕ",
        "detpos": ":Dp", "detdem": ":Dd", "detind": ":Di", "detneg": ":Dn", "detex": ":De", "det": ":D",
        "advint": ":U",
        "prodem": ":Od", "proind": ":Oi", "proint": ":Ot", "proneg": ":On", "prorel": ":Or", "proadv": ":Ow",
        "properobj": ":Oo", "propersuj": ":Os", "1pe": ":O1", "2pe": ":O2", "3pe": ":O3", "preverb": ":Ov",
        "cjco": ":Cc", "cjsub": ":Cs", "cj": ":C", "loc.cj": ":Ĉ", "loc.cjsub": ":Ĉs",
        "prn": ":M1", "patr": ":M2", "loc.patr": ":Ḿ2", "npr": ":MP", "nompr": ":NM",
        "pfx": ":Zp", "sfx": ":Zs",
        "div": ":H",
        "err": ":#",
        # LEX
        "symb": ";S"
................................................................................
            s += "/" + self.oEntry.di
        return s

    def keyTriNat (self):
        return (self.sFlexion.translate(CHARMAP), self.sMorph)

    def keyFreq (self):
        return (100-self.fFreq, self.oEntry.sStem, self.sFlexion)

    def keyOcc (self):
        return (self.nOccur, self.oEntry.sStem, self.sFlexion)

    def keyIdx (self):
        return self.oEntry.iD

    def keyFlexion (self):
        return self.sFlexion

................................................................................
                hDst.write(str(t)+"\n")
            for e in self.dFlexions.items():
                hDst.write("{} - {}\n".format(e[0], e[1]))



def main ():

    xParser = argparse.ArgumentParser()
    xParser.add_argument("-v", "--verdic", help="set dictionary version, i.e. 5.4", type=str, default="X.Y.z")
    xParser.add_argument("-m", "--mode", help="0: no tags,  1: Hunspell tags (default),  2: All tags", type=int, choices=[0, 1, 2], default=1)
    xParser.add_argument("-u", "--uncompress", help="do not use Hunspell compression", action="store_true")
    xParser.add_argument("-s", "--simplify", help="no virtual lemmas", action="store_true")
    xParser.add_argument("-sv", "--spellvariants", help="generate spell variants", action="store_true")
    xParser.add_argument("-gl", "--grammalecte", help="copy generated files to Grammalecte folders", action="store_true")
................................................................................
    oFrenchDict.calculateStats(oStatsLex, spfStats)

    ### écriture des paquets
    echo("Création des paquets...")

    spLexiconDestGL = "../../../lexicons"  if xArgs.grammalecte  else ""
    spLibreOfficeExtDestGL = "../oxt/Dictionnaires/dictionaries"  if xArgs.grammalecte  else ""
    spMozillaExtDestGL = ""  if xArgs.grammalecte  else "" # no more Hunspell dictionaries in Mozilla extensions for now
    spDataDestGL = "../data"  if xArgs.grammalecte  else ""

    if not xArgs.uncompress:
        oFrenchDict.defineAbreviatedTags(xArgs.mode, spfStats)
    oFrenchDict.createFiles(spBuild, [dMODERNE, dTOUTESVAR, dCLASSIQUE, dREFORME1990], xArgs.mode, xArgs.simplify)
    oFrenchDict.createLexiconPackages(spBuild, xArgs.verdic, oStatsLex, spLexiconDestGL)
    oFrenchDict.createFileIfqForDB(spBuild)

Modified gc_lang/fr/dictionnaire/lexique/french.tagset.txt from [b4e8126915] to [4a587bac86].

        avec verbe auxiliaire être ?        _________________________/  |
        avec verbe auxiliaire avoir ?       ____________________________/

    Infinitif                   :Y
    Participe présent           :P
    Participe passé             :Q

    Indicatif présent           :Ip         1re personne du singulier   :1s  (forme interrogative: 1ś ou 1ŝ)
    Indicatif imparfait         :Iq         2e personne du singulier    :2s
    Indicatif passé simple      :Is         3e personne du singulier    :3s
    Indicatif futur             :If         1re personne du pluriel     :1p
                                            2e personne du pluriel      :2p
    Subjonctif présent          :Sp         3e personne du pluriel      :3p
    Subjonctif imparfait        :Sq
    
    Conditionnel                :K
    Impératif                   :E


//  MOTS GRAMMATICAUX

    Mot grammatical                     :G
................................................................................
    pronom indéterminé                  :Oi
    pronom interrogatif                 :Oj
    pronom relatif                      :Or
    pronom de négation                  :On
    pronom adverbial                    :Ow
    pronom personnel sujet              :Os
    pronom personnel objet              :Oo



========== MÉMO ==========

:A  Adjectif
:B  Nombre cardinal
:C  Conjonction
:D  Déterminant
:E  Impératif (verbe)
:F  
:G  Mot grammatical
:H  <Hors-norme>
:I  Indicatif (verbe)
:J  Interjection
:K  Conditionnel (verbe)
:L  Locution
:M  Nom propre (sans article)
................................................................................
:U  Adverbe interrogatif
:V  Verbe (quelle que soit la forme)
:W  Adverbe
:X  Adverbe de négation
:Y  Infinitif (verbe)
:Z  Préfixe ou suffixe

:1  1re personne (verbe)
:2  2e personne (verbe)
:3  3e personne (verbe)

:e  épicène
:f  féminin
:i  invariable
:m  masculin
:p  pluriel
:s  singulier







        avec verbe auxiliaire être ?        _________________________/  |
        avec verbe auxiliaire avoir ?       ____________________________/

    Infinitif                   :Y
    Participe présent           :P
    Participe passé             :Q

    Indicatif présent           :Ip         1ʳᵉ personne du singulier   :1s  (forme interrogative: 1ś ou 1ŝ)
    Indicatif imparfait         :Iq         2ᵉ personne du singulier    :2s
    Indicatif passé simple      :Is         3ᵉ personne du singulier    :3s
    Indicatif futur             :If         1ʳᵉ personne du pluriel     :1p
                                            2ᵉ personne du pluriel      :2p
    Subjonctif présent          :Sp         3ᵉ personne du pluriel      :3p
    Subjonctif imparfait        :Sq

    Conditionnel                :K
    Impératif                   :E


//  MOTS GRAMMATICAUX

    Mot grammatical                     :G
................................................................................
    pronom indéterminé                  :Oi
    pronom interrogatif                 :Oj
    pronom relatif                      :Or
    pronom de négation                  :On
    pronom adverbial                    :Ow
    pronom personnel sujet              :Os
    pronom personnel objet              :Oo
    préverbe (pron. p. obj. + ne)       :Ov  (l’étiquette pour “ne” est une inexactitude commode)


========== MÉMO ==========

:A  Adjectif
:B  Nombre cardinal
:C  Conjonction
:D  Déterminant
:E  Impératif (verbe)
:F
:G  Mot grammatical
:H  <Hors-norme>
:I  Indicatif (verbe)
:J  Interjection
:K  Conditionnel (verbe)
:L  Locution
:M  Nom propre (sans article)
................................................................................
:U  Adverbe interrogatif
:V  Verbe (quelle que soit la forme)
:W  Adverbe
:X  Adverbe de négation
:Y  Infinitif (verbe)
:Z  Préfixe ou suffixe

:1  1ʳᵉ personne (verbe)
:2  2ᵉ personne (verbe)
:3  3ᵉ personne (verbe)

:e  épicène
:f  féminin
:i  invariable
:m  masculin
:p  pluriel
:s  singulier

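The tags listed above are the building blocks of the morphology strings handled by the engine. A minimal JavaScript sketch of how such a tag is typically tested, assuming the ">lemme/:étiquettes" layout used by the modules below; the sample string and the hasTag helper are illustrative assumptions, not part of the tagset file.

// sketch only; the sample morphology string and the helper name are assumptions
function hasTag (sMorph, sTag) {
    // a tag such as ":3s" or ":Ip" is matched as a plain substring
    return sMorph.includes(sTag);
}

let sMorph = ">porter/:V1:Ip:3s";       // illustrative: "porte", indicatif présent, 3ᵉ personne du singulier
console.log(hasTag(sMorph, ":V1"));     // true
console.log(hasTag(sMorph, ":3s"));     // true
console.log(hasTag(sMorph, ":Q"));      // false (not a participe passé)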
Modified gc_lang/fr/dictionnaire/orthographe/FRANCAIS.dic from [c0e9108156] to [cdf07a1aa1].

more than 10,000 changes

Modified gc_lang/fr/modules-js/conj.js from [f544af05b0] to [8124143953].

        return this._lVtyp[this._dVerb[sVerb][0]];
    },

    getSimil: function (sWord, sMorph, bSubst=false) {
        if (!sMorph.includes(":V")) {
            return new Set();
        }
        let sInfi = sMorph.slice(1, sMorph.indexOf(" "));
        let aSugg = new Set();
        let tTags = this._getTags(sInfi);
        if (tTags) {
            if (!bSubst) {
                // we suggest conjugated forms
                if (sMorph.includes(":V1")) {
                    aSugg.add(sInfi);

        return this._lVtyp[this._dVerb[sVerb][0]];
    },

    getSimil: function (sWord, sMorph, bSubst=false) {
        if (!sMorph.includes(":V")) {
            return new Set();
        }
        let sInfi = sMorph.slice(1, sMorph.indexOf("/"));
        let aSugg = new Set();
        let tTags = this._getTags(sInfi);
        if (tTags) {
            if (!bSubst) {
                // we suggest conjugated forms
                if (sMorph.includes(":V1")) {
                    aSugg.add(sInfi);

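The only functional change in getSimil() above is the delimiter used to pull the infinitive out of a morphology string: a space before, a slash now. A short sketch under that assumption; the sample string is illustrative.

// sketch only; the morphology string is an assumption based on the ">lemma/:tags" layout
let sMorph = ">aimer/:V1:Ip:3s";
let sInfi = sMorph.slice(1, sMorph.indexOf("/"));
console.log(sInfi);    // "aimer"   (the old code split on the first space instead)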
Modified gc_lang/fr/modules-js/gce_analyseur.js from [e2613ddcd2] to [427ee71140].

//// GRAMMAR CHECKING ENGINE PLUGIN: Parsing functions for French language
/*jslint esversion: 6*/









function rewriteSubject (s1, s2) {
    // s1 is supposed to be prn/patr/npr (M[12P])
    if (s2 == "lui") {
        return "ils";
    }
    if (s2 == "moi") {
................................................................................
    if (s2 == "vous") {
        return "vous";
    }
    if (s2 == "eux") {
        return "ils";
    }
    if (s2 == "elle" || s2 == "elles") {
        // We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
        if (cregex.mbNprMasNotFem(_dAnalyses.gl_get(s1, ""))) {
            return "ils";
        }
        // si épicène, indéterminable, mais OSEF, le féminin l’emporte
        return "elles";
    }
    return s1 + " et " + s2;
}

function apposition (sWord1, sWord2) {
    // returns true if nom + nom (no agreement required)
    // We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    return cregex.mbNomNotAdj(_dAnalyses.gl_get(sWord2, "")) && cregex.mbPpasNomNotAdj(_dAnalyses.gl_get(sWord1, ""));
}

function isAmbiguousNAV (sWord) {
    // words which are nom|adj and verb are ambiguous (except être and avoir)
    if (!_dAnalyses.has(sWord) && !_storeMorphFromFSA(sWord)) {

        return false;
    }
    if (!cregex.mbNomAdj(_dAnalyses.gl_get(sWord, "")) || sWord == "est") {

        return false;
    }
    if (cregex.mbVconj(_dAnalyses.gl_get(sWord, "")) && !cregex.mbMG(_dAnalyses.gl_get(sWord, ""))) {

        return true;
    }
    return false;
}

function isAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj) {
    //// use it if sWord1 won’t be a verb; word2 is assumed to be true via isAmbiguousNAV
    // We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let a2 = _dAnalyses.gl_get(sWord2, null);
    if (!a2 || a2.length === 0) {
        return false;
    }
    if (cregex.checkConjVerb(a2, sReqMorphConj)) {
        // verb word2 is ok
        return false;
    }
    let a1 = _dAnalyses.gl_get(sWord1, null);
    if (!a1 || a1.length === 0) {
        return false;
    }
    if (cregex.checkAgreement(a1, a2) && (cregex.mbAdj(a2) || cregex.mbAdj(a1))) {
        return false;
    }
    return true;
}

function isVeryAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj, bLastHopeCond) {
    //// use it if sWord1 can be also a verb; word2 is assumed to be true via isAmbiguousNAV
    // We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let a2 = _dAnalyses.gl_get(sWord2, null);
    if (!a2 || a2.length === 0) {
        return false;
    }
    if (cregex.checkConjVerb(a2, sReqMorphConj)) {
        // verb word2 is ok
        return false;
    }
    let a1 = _dAnalyses.gl_get(sWord1, null);
    if (!a1 || a1.length === 0) {
        return false;
    }
    if (cregex.checkAgreement(a1, a2) && (cregex.mbAdj(a2) || cregex.mbAdjNb(a1))) {
        return false;
    }
    // now, we know there is no agreement, and the conjugation is also wrong
    if (cregex.isNomAdj(a1)) {
................................................................................
    if (bLastHopeCond) {
        return true;
    }
    return false;
}

function checkAgreement (sWord1, sWord2) {
    // We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let a2 = _dAnalyses.gl_get(sWord2, null);
    if (!a2 || a2.length === 0) {
        return true;
    }
    let a1 = _dAnalyses.gl_get(sWord1, null);

    if (!a1 || a1.length === 0) {
        return true;
    }
    return cregex.checkAgreement(a1, a2);
}

function mbUnit (s) {
    if (/[µ\/⁰¹²³⁴⁵⁶⁷⁸⁹Ωℓ·]/.test(s)) {
        return true;
    }
    if (s.length > 1 && s.length < 16 && s.slice(0, 1).gl_isLowerCase() && (!s.slice(1).gl_isLowerCase() || /[0-9]/.test(s))) {
        return true;
    }
    return false;
}


//// Syntagmes

const _zEndOfNG1 = new RegExp ("^ *$|^ +(?:, +|)(?:n(?:’|e |o(?:u?s|tre) )|l(?:’|e(?:urs?|s|) |a )|j(?:’|e )|m(?:’|es? |a |on )|t(?:’|es? |a |u )|s(?:’|es? |a )|c(?:’|e(?:t|tte|s|) )|ç(?:a |’)|ils? |vo(?:u?s|tre) )");
const _zEndOfNG2 = new RegExp ("^ +([a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ][a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ-]+)");
const _zEndOfNG3 = new RegExp ("^ *, +([a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ][a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ-]+)");

function isEndOfNG (dDA, s, iOffset) {
    if (_zEndOfNG1.test(s)) {
        return true;
    }
    let m = _zEndOfNG2.gl_exec2(s, ["$"]);
    if (m && morphex(dDA, [iOffset+m.start[1], m[1]], ":[VR]", ":[NAQP]")) {
        return true;
    }
    m = _zEndOfNG3.gl_exec2(s, ["$"]);
    if (m && !morph(dDA, [iOffset+m.start[1], m[1]], ":[NA]", false)) {
        return true;
    }
    return false;
}


const _zNextIsNotCOD1 = new RegExp ("^ *,");
const _zNextIsNotCOD2 = new RegExp ("^ +(?:[mtsnj](e +|’)|[nv]ous |tu |ils? |elles? )");
const _zNextIsNotCOD3 = new RegExp ("^ +([a-zéèî][a-zà-öA-Zø-ÿÀ-ÖØ-ßĀ-ʯ-]+)");

function isNextNotCOD (dDA, s, iOffset) {
    if (_zNextIsNotCOD1.test(s) || _zNextIsNotCOD2.test(s)) {
        return true;
    }
    let m = _zNextIsNotCOD3.gl_exec2(s, ["$"]);
    if (m && morphex(dDA, [iOffset+m.start[1], m[1]], ":[123][sp]", ":[DM]")) {
        return true;
    }
    return false;
}


const _zNextIsVerb1 = new RegExp ("^ +[nmts](?:e |’)");
const _zNextIsVerb2 = new RegExp ("^ +([a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ][a-zà-öA-Zø-ÿÀ-Ö0-9_Ø-ßĀ-ʯ-]+)");

function isNextVerb (dDA, s, iOffset) {
    if (_zNextIsVerb1.test(s)) {
        return true;
    }
    let m = _zNextIsVerb2.gl_exec2(s, ["$"]);
    if (m && morph(dDA, [iOffset+m.start[1], m[1]], ":[123][sp]", false)) {
        return true;
    }
    return false;
}


//// Exceptions

const aREGULARPLURAL = new Set(["abricot", "amarante", "aubergine", "acajou", "anthracite", "brique", "caca", "café",
                                "carotte", "cerise", "chataigne", "corail", "citron", "crème", "grave", "groseille",
                                "jonquille", "marron", "olive", "pervenche", "prune", "sable"]);
const aSHOULDBEVERB = new Set(["aller", "manger"]);


//// GRAMMAR CHECKING ENGINE PLUGIN: Parsing functions for French language
/*jslint esversion: 6*/

function g_morphVC (dToken, sPattern, sNegPattern="") {
    let nEnd = dToken["sValue"].lastIndexOf("-");
    if (dToken["sValue"].includes("-t-")) {
        nEnd = nEnd - 2;
    }
    return g_morph(dToken, sPattern, sNegPattern, 0, nEnd, false);
}

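// Worked example for g_morphVC above (illustrative note, not part of the original file):
//   sValue == "donne-t-il"  →  lastIndexOf("-") == 7 and "-t-" is present, so nEnd == 5
//                              and the morphology test applies to "donne" only;
//   sValue == "dis-le"      →  nEnd == 3, the test applies to "dis".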
function rewriteSubject (s1, s2) {
    // s1 is supposed to be prn/patr/npr (M[12P])
    if (s2 == "lui") {
        return "ils";
    }
    if (s2 == "moi") {
................................................................................
    if (s2 == "vous") {
        return "vous";
    }
    if (s2 == "eux") {
        return "ils";
    }
    if (s2 == "elle" || s2 == "elles") {

        if (cregex.mbNprMasNotFem(_oSpellChecker.getMorph(s1))) {
            return "ils";
        }
        // si épicène, indéterminable, mais OSEF, le féminin l’emporte
        return "elles";
    }
    return s1 + " et " + s2;
}

function apposition (sWord1, sWord2) {
    // returns true if nom + nom (no agreement required)
    return sWord2.length < 2 || (cregex.mbNomNotAdj(_oSpellChecker.getMorph(sWord2)) && cregex.mbPpasNomNotAdj(_oSpellChecker.getMorph(sWord1)));

}

function isAmbiguousNAV (sWord) {
    // words which are nom|adj and verb are ambiguous (except être and avoir)
    let lMorph = _oSpellChecker.getMorph(sWord);
    if (lMorph.length === 0) {
        return false;
    }

    if (!cregex.mbNomAdj(lMorph) || sWord == "est") {
        return false;
    }

    if (cregex.mbVconj(lMorph) && !cregex.mbMG(lMorph)) {
        return true;
    }
    return false;
}

function isAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj) {
    //// use it if sWord1 won’t be a verb; word2 is assumed to be true via isAmbiguousNAV
    let a2 = _oSpellChecker.getMorph(sWord2);

    if (a2.length === 0) {
        return false;
    }
    if (cregex.checkConjVerb(a2, sReqMorphConj)) {
        // verb word2 is ok
        return false;
    }
    let a1 = _oSpellChecker.getMorph(sWord1);
    if (a1.length === 0) {
        return false;
    }
    if (cregex.checkAgreement(a1, a2) && (cregex.mbAdj(a2) || cregex.mbAdj(a1))) {
        return false;
    }
    return true;
}

function isVeryAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj, bLastHopeCond) {
    //// use it if sWord1 can be also a verb; word2 is assumed to be true via isAmbiguousNAV
    let a2 = _oSpellChecker.getMorph(sWord2);

    if (a2.length === 0) {
        return false;
    }
    if (cregex.checkConjVerb(a2, sReqMorphConj)) {
        // verb word2 is ok
        return false;
    }
    let a1 = _oSpellChecker.getMorph(sWord1);
    if (a1.length === 0) {
        return false;
    }
    if (cregex.checkAgreement(a1, a2) && (cregex.mbAdj(a2) || cregex.mbAdjNb(a1))) {
        return false;
    }
    // now, we know there is no agreement, and the conjugation is also wrong
    if (cregex.isNomAdj(a1)) {
................................................................................
    if (bLastHopeCond) {
        return true;
    }
    return false;
}

function checkAgreement (sWord1, sWord2) {
    let a2 = _oSpellChecker.getMorph(sWord2);

    if (a2.length === 0) {
        return true;
    }

    let a1 = _oSpellChecker.getMorph(sWord1);
    if (a1.length === 0) {
        return true;
    }
    return cregex.checkAgreement(a1, a2);
}

function mbUnit (s) {
    if (/[µ\/⁰¹²³⁴⁵⁶⁷⁸⁹Ωℓ·]/.test(s)) {
        return true;
    }
    if (s.length > 1 && s.length < 16 && s.slice(0, 1).gl_isLowerCase() && (!s.slice(1).gl_isLowerCase() || /[0-9]/.test(s))) {
        return true;
    }
    return false;
}


//// Exceptions

const aREGULARPLURAL = new Set(["abricot", "amarante", "aubergine", "acajou", "anthracite", "brique", "caca", "café",
                                "carotte", "cerise", "chataigne", "corail", "citron", "crème", "grave", "groseille",
                                "jonquille", "marron", "olive", "pervenche", "prune", "sable"]);
const aSHOULDBEVERB = new Set(["aller", "manger"]);

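The common thread of this rewrite is that the _dAnalyses cache (filled by _storeMorphFromFSA) is gone: every helper now queries _oSpellChecker.getMorph() directly. A self-contained sketch of that change; the stub and its morphology strings are assumptions for illustration, the real _oSpellChecker is supplied by the engine.

// stubbed spell checker, for illustration only
const _oSpellChecker = {
    getMorph: function (sWord) {
        // the real object returns every known morphology for sWord, or an empty array
        return (sWord === "porte") ? [">porter/:V1:Ip:3s", ">porte/:N:f:s"] : [];
    }
};
// old style:  _dAnalyses.gl_get(sWord, [])      (cache filled beforehand by _storeMorphFromFSA)
// new style:  _oSpellChecker.getMorph(sWord)    (queried directly, no cache)
console.log(_oSpellChecker.getMorph("porte").some(s => s.includes(":V")));    // true
console.log(_oSpellChecker.getMorph("xyzzy").length === 0);                   // true (unknown word)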
Modified gc_lang/fr/modules-js/gce_suggestions.js from [5849d4fa9b] to [ae7eaca80b].

    var mfsp = require("resource://grammalecte/fr/mfsp.js");
    var phonet = require("resource://grammalecte/fr/phonet.js");
}


//// verbs

function suggVerb (sFlex, sWho, funcSugg2=null) {
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before




    let aSugg = new Set();
    for (let sStem of stem(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            // we get the tense
            let aTense = new Set();
            for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
                let m;
                let zVerb = new RegExp (">"+sStem+" .*?(:(?:Y|I[pqsf]|S[pq]|K))", "g");
                while ((m = zVerb.exec(sMorph)) !== null) {
                    // stem must be used in regex to prevent confusion between different verbs (e.g. sauras has 2 stems: savoir and saurer)
                    if (m) {
                        if (m[1] === ":Y") {
                            aTense.add(":Ip");
                            aTense.add(":Iq");
                            aTense.add(":Is");
................................................................................
    if (funcSugg2) {
        let aSugg2 = funcSugg2(sFlex);
        if (aSugg2.size > 0) {
            aSugg.add(aSugg2);
        }
    }
    if (aSugg.size > 0) {



        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbPpas (sFlex, sWhat=null) {
    let aSugg = new Set();
    for (let sStem of stem(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            if (!sWhat) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"));
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbTense (sFlex, sTense, sWho) {
    let aSugg = new Set();
    for (let sStem of stem(sFlex)) {
        if (conj.hasConj(sStem, sTense, sWho)) {
            aSugg.add(conj.getConj(sStem, sTense, sWho));
        }
    }
    if (aSugg.size > 0) {
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbImpe (sFlex) {




    let aSugg = new Set();
    for (let sStem of stem(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            if (conj._hasConjWithTags(tTags, ":E", ":2s")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2s"));
            }
            if (conj._hasConjWithTags(tTags, ":E", ":1p")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":1p"));
................................................................................
            }
            if (conj._hasConjWithTags(tTags, ":E", ":2p")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2p"));
            }
        }
    }
    if (aSugg.size > 0) {



        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbInfi (sFlex) {
    return stem(sFlex).filter(sStem => conj.isVerb(sStem)).join("|");
}


const _dQuiEst = new Map ([
    ["je", ":1s"], ["j’", ":1s"], ["j’en", ":1s"], ["j’y", ":1s"],
    ["tu", ":2s"], ["il", ":3s"], ["on", ":3s"], ["elle", ":3s"],
    ["nous", ":1p"], ["vous", ":2p"], ["ils", ":3p"], ["elles", ":3p"]
................................................................................
    if (!sWho) {
        if (sSuj[0].gl_isLowerCase()) { // pas un pronom, ni un nom propre
            return "";
        }
        sWho = ":3s";
    }
    let aSugg = new Set();
    for (let sStem of stem(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            for (let sTense of lMode) {
                if (conj._hasConjWithTags(tTags, sTense, sWho)) {
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho));
                }
            }
................................................................................
}

//// Nouns and adjectives

function suggPlur (sFlex, sWordToAgree=null) {
    // returns plural forms assuming sFlex is singular
    if (sWordToAgree) {
        if (!_dAnalyses.has(sWordToAgree) && !_storeMorphFromFSA(sWordToAgree)) {

            return "";
        }
        let sGender = cregex.getGender(_dAnalyses.gl_get(sWordToAgree, []));
        if (sGender == ":m") {
            return suggMasPlur(sFlex);
        } else if (sGender == ":f") {
            return suggFemPlur(sFlex);
        }
    }
    let aSugg = new Set();
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggMasSing (sFlex, bSuggSimil=false) {
    // returns masculine singular forms
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let aSugg = new Set();
    for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":m") || sMorph.includes(":e")) {
                aSugg.add(suggSing(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggMasPlur (sFlex, bSuggSimil=false) {
    // returns masculine plural forms
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let aSugg = new Set();
    for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":m") || sMorph.includes(":e")) {
                aSugg.add(suggPlur(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
    }
    return "";
}


function suggFemSing (sFlex, bSuggSimil=false) {
    // returns feminine singular forms
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let aSugg = new Set();
    for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":f") || sMorph.includes(":e")) {
                aSugg.add(suggSing(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggFemPlur (sFlex, bSuggSimil=false) {
    // returns feminine plural forms
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let aSugg = new Set();
    for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":f") || sMorph.includes(":e")) {
                aSugg.add(suggPlur(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
    if (aSugg.size > 0) {
        return Array.from(aSugg).join("|");
    }
    return "";
}

function hasFemForm (sFlex) {
    for (let sStem of stem(sFlex)) {
        if (mfsp.isFemForm(sStem) || conj.hasConj(sStem, ":PQ", ":Q3")) {
            return true;
        }
    }
    if (phonet.hasSimil(sFlex, ":f")) {
        return true;
    }
    return false;
}

function hasMasForm (sFlex) {
    for (let sStem of stem(sFlex)) {
        if (mfsp.isFemForm(sStem) || conj.hasConj(sStem, ":PQ", ":Q1")) {
            // what has a feminine form also has a masculine form
            return true;
        }
    }
    if (phonet.hasSimil(sFlex, ":m")) {
        return true;
    }
    return false;
}

function switchGender (sFlex, bPlur=null) {
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    let aSugg = new Set();
    if (bPlur === null) {
        for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
            if (sMorph.includes(":f")) {
                if (sMorph.includes(":s")) {
                    aSugg.add(suggMasSing(sFlex));
                } else if (sMorph.includes(":p")) {
                    aSugg.add(suggMasPlur(sFlex));
                }
            } else if (sMorph.includes(":m")) {
................................................................................
                } else {
                    aSugg.add(suggFemSing(sFlex));
                    aSugg.add(suggFemPlur(sFlex));
                }
            }
        }
    } else if (bPlur) {
        for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
            if (sMorph.includes(":f")) {
                aSugg.add(suggMasPlur(sFlex));
            } else if (sMorph.includes(":m")) {
                aSugg.add(suggFemPlur(sFlex));
            }
        }
    } else {
        for (let sMorph of _dAnalyses.gl_get(sFlex, [])) {
            if (sMorph.includes(":f")) {
                aSugg.add(suggMasSing(sFlex));
            } else if (sMorph.includes(":m")) {
                aSugg.add(suggFemSing(sFlex));
            }
        }
    }
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function switchPlural (sFlex) {
    let aSugg = new Set();
    for (let sMorph of _dAnalyses.gl_get(sFlex, [])) { // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
        if (sMorph.includes(":s")) {
            aSugg.add(suggPlur(sFlex));
        } else if (sMorph.includes(":p")) {
            aSugg.add(suggSing(sFlex));
        }
    }
    if (aSugg.size > 0) {
................................................................................
    return "";
}

function hasSimil (sWord, sPattern=null) {
    return phonet.hasSimil(sWord, sPattern);
}

function suggSimil (sWord, sPattern=null, bSubst=false) {
    // return a list of words phonetically similar to sWord and whose POS matches sPattern




    let aSugg = phonet.selectSimil(sWord, sPattern);
    for (let sMorph of _dAnalyses.gl_get(sWord, [])) {
        for (let e of conj.getSimil(sWord, sMorph, bSubst)) {
            aSugg.add(e);
        }
    }
    if (aSugg.size > 0) {



        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggCeOrCet (sWord) {
    if (/^[aeéèêiouyâîï]/i.test(sWord)) {
................................................................................
    if (sWord[0] == "h" || sWord[0] == "H") {
        return "ce|cet";
    }
    return "ce";
}

function suggLesLa (sWord) {
    // we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    if (_dAnalyses.gl_get(sWord, []).some(s  =>  s.includes(":p"))) {
        return "les|la";
    }
    return "la";
}

function formatNumber (s) {
    let nLen = s.length;

    var mfsp = require("resource://grammalecte/fr/mfsp.js");
    var phonet = require("resource://grammalecte/fr/phonet.js");
}


//// verbs

function splitVerb (sVerb) {
    // renvoie le verbe et les pronoms séparément
    let iRight = sVerb.lastIndexOf("-");
    let sSuffix = sVerb.slice(iRight);
    sVerb = sVerb.slice(0, iRight);
    if (sVerb.endsWith("-t") || sVerb.endsWith("-le") || sVerb.endsWith("-la") || sVerb.endsWith("-les")) {
        iRight = sVerb.lastIndexOf("-");
        sSuffix = sVerb.slice(iRight) + sSuffix;
        sVerb = sVerb.slice(0, iRight);
    }
    return [sVerb, sSuffix];
}

function suggVerb (sFlex, sWho, funcSugg2=null, bVC=false) {

    let sSfx;
    if (bVC) {
        [sFlex, sSfx] = splitVerb(sFlex);
    }
    let aSugg = new Set();
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            // we get the tense
            let aTense = new Set();
            for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
                let m;
                let zVerb = new RegExp (">"+sStem+"/.*?(:(?:Y|I[pqsf]|S[pq]|K))", "g");
                while ((m = zVerb.exec(sMorph)) !== null) {
                    // stem must be used in regex to prevent confusion between different verbs (e.g. sauras has 2 stems: savoir and saurer)
                    if (m) {
                        if (m[1] === ":Y") {
                            aTense.add(":Ip");
                            aTense.add(":Iq");
                            aTense.add(":Is");
................................................................................
    if (funcSugg2) {
        let aSugg2 = funcSugg2(sFlex);
        if (aSugg2.size > 0) {
            aSugg.add(aSugg2);
        }
    }
    if (aSugg.size > 0) {
        if (bVC) {
            return Array.from(aSugg).map((sSugg) => { return sSugg + sSfx; }).join("|");
        }
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbPpas (sFlex, sWhat=null) {
    let aSugg = new Set();
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            if (!sWhat) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"));
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"));
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbTense (sFlex, sTense, sWho) {
    let aSugg = new Set();
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        if (conj.hasConj(sStem, sTense, sWho)) {
            aSugg.add(conj.getConj(sStem, sTense, sWho));
        }
    }
    if (aSugg.size > 0) {
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbImpe (sFlex, bVC=false) {
    let sSfx;
    if (bVC) {
        [sFlex, sSfx] = splitVerb(sFlex);
    }
    let aSugg = new Set();
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            if (conj._hasConjWithTags(tTags, ":E", ":2s")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2s"));
            }
            if (conj._hasConjWithTags(tTags, ":E", ":1p")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":1p"));
................................................................................
            }
            if (conj._hasConjWithTags(tTags, ":E", ":2p")) {
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2p"));
            }
        }
    }
    if (aSugg.size > 0) {
        if (bVC) {
            return Array.from(aSugg).map((sSugg) => { return sSugg + sSfx; }).join("|");
        }
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggVerbInfi (sFlex) {
    return _oSpellChecker.getLemma(sFlex).filter(sStem => conj.isVerb(sStem)).join("|");
}


const _dQuiEst = new Map ([
    ["je", ":1s"], ["j’", ":1s"], ["j’en", ":1s"], ["j’y", ":1s"],
    ["tu", ":2s"], ["il", ":3s"], ["on", ":3s"], ["elle", ":3s"],
    ["nous", ":1p"], ["vous", ":2p"], ["ils", ":3p"], ["elles", ":3p"]
................................................................................
    if (!sWho) {
        if (sSuj[0].gl_isLowerCase()) { // pas un pronom, ni un nom propre
            return "";
        }
        sWho = ":3s";
    }
    let aSugg = new Set();
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        let tTags = conj._getTags(sStem);
        if (tTags) {
            for (let sTense of lMode) {
                if (conj._hasConjWithTags(tTags, sTense, sWho)) {
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho));
                }
            }
................................................................................
}

//// Nouns and adjectives

function suggPlur (sFlex, sWordToAgree=null) {
    // returns plural forms assuming sFlex is singular
    if (sWordToAgree) {
        let lMorph = _oSpellChecker.getMorph(sWordToAgree);
        if (lMorph.length === 0) {
            return "";
        }
        let sGender = cregex.getGender(lMorph);
        if (sGender == ":m") {
            return suggMasPlur(sFlex);
        } else if (sGender == ":f") {
            return suggFemPlur(sFlex);
        }
    }
    let aSugg = new Set();
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggMasSing (sFlex, bSuggSimil=false) {
    // returns masculine singular forms

    let aSugg = new Set();
    for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":m") || sMorph.includes(":e")) {
                aSugg.add(suggSing(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggMasPlur (sFlex, bSuggSimil=false) {
    // returns masculine plural forms

    let aSugg = new Set();
    for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":m") || sMorph.includes(":e")) {
                aSugg.add(suggPlur(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
    }
    return "";
}


function suggFemSing (sFlex, bSuggSimil=false) {
    // returns feminine singular forms

    let aSugg = new Set();
    for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":f") || sMorph.includes(":e")) {
                aSugg.add(suggSing(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggFemPlur (sFlex, bSuggSimil=false) {
    // returns feminine plural forms

    let aSugg = new Set();
    for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
        if (!sMorph.includes(":V")) {
            // not a verb
            if (sMorph.includes(":f") || sMorph.includes(":e")) {
                aSugg.add(suggPlur(sFlex));
            } else {
                let sStem = cregex.getLemmaOfMorph(sMorph);
                if (mfsp.isFemForm(sStem)) {
................................................................................
    if (aSugg.size > 0) {
        return Array.from(aSugg).join("|");
    }
    return "";
}

function hasFemForm (sFlex) {
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        if (mfsp.isFemForm(sStem) || conj.hasConj(sStem, ":PQ", ":Q3")) {
            return true;
        }
    }
    if (phonet.hasSimil(sFlex, ":f")) {
        return true;
    }
    return false;
}

function hasMasForm (sFlex) {
    for (let sStem of _oSpellChecker.getLemma(sFlex)) {
        if (mfsp.isFemForm(sStem) || conj.hasConj(sStem, ":PQ", ":Q1")) {
            // what has a feminine form also has a masculine form
            return true;
        }
    }
    if (phonet.hasSimil(sFlex, ":m")) {
        return true;
    }
    return false;
}

function switchGender (sFlex, bPlur=null) {

    let aSugg = new Set();
    if (bPlur === null) {
        for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
            if (sMorph.includes(":f")) {
                if (sMorph.includes(":s")) {
                    aSugg.add(suggMasSing(sFlex));
                } else if (sMorph.includes(":p")) {
                    aSugg.add(suggMasPlur(sFlex));
                }
            } else if (sMorph.includes(":m")) {
................................................................................
                } else {
                    aSugg.add(suggFemSing(sFlex));
                    aSugg.add(suggFemPlur(sFlex));
                }
            }
        }
    } else if (bPlur) {
        for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
            if (sMorph.includes(":f")) {
                aSugg.add(suggMasPlur(sFlex));
            } else if (sMorph.includes(":m")) {
                aSugg.add(suggFemPlur(sFlex));
            }
        }
    } else {
        for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
            if (sMorph.includes(":f")) {
                aSugg.add(suggMasSing(sFlex));
            } else if (sMorph.includes(":m")) {
                aSugg.add(suggFemSing(sFlex));
            }
        }
    }
................................................................................
        return Array.from(aSugg).join("|");
    }
    return "";
}

function switchPlural (sFlex) {
    let aSugg = new Set();
    for (let sMorph of _oSpellChecker.getMorph(sFlex)) {
        if (sMorph.includes(":s")) {
            aSugg.add(suggPlur(sFlex));
        } else if (sMorph.includes(":p")) {
            aSugg.add(suggSing(sFlex));
        }
    }
    if (aSugg.size > 0) {
................................................................................
    return "";
}

function hasSimil (sWord, sPattern=null) {
    return phonet.hasSimil(sWord, sPattern);
}

function suggSimil (sWord, sPattern=null, bSubst=false, bVC=false) {
    // return a list of words phonetically similar to sWord and whose POS matches sPattern
    let sSfx;
    if (bVC) {
        [sWord, sSfx] = splitVerb(sWord);
    }
    let aSugg = phonet.selectSimil(sWord, sPattern);
    for (let sMorph of _oSpellChecker.getMorph(sWord)) {
        for (let e of conj.getSimil(sWord, sMorph, bSubst)) {
            aSugg.add(e);
        }
    }
    if (aSugg.size > 0) {
        if (bVC) {
            return Array.from(aSugg).map((sSugg) => { return sSugg + sSfx; }).join("|");
        }
        return Array.from(aSugg).join("|");
    }
    return "";
}

function suggCeOrCet (sWord) {
    if (/^[aeéèêiouyâîï]/i.test(sWord)) {
................................................................................
    if (sWord[0] == "h" || sWord[0] == "H") {
        return "ce|cet";
    }
    return "ce";
}

function suggLesLa (sWord) {

    if (_oSpellChecker.getMorph(sWord).some(s  =>  s.includes(":p"))) {
        return "les|la";
    }
    return "la";
}

function formatNumber (s) {
    let nLen = s.length;

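Usage sketch for the new verb/pronoun handling; the inputs are illustrative and the expected results are traced by hand from splitVerb() defined above (the module must be loaded for this to run).

console.log(splitVerb("donne-moi"));       // ["donne", "-moi"]
console.log(splitVerb("donne-le-moi"));    // ["donne", "-le-moi"]
console.log(splitVerb("vient-il"));        // ["vient", "-il"]
// suggVerb(sFlex, sWho, null, true) splits the same way, builds its suggestions
// on the bare verb, then appends the pronoun suffix back to each suggestion.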
Modified gc_lang/fr/modules-js/lexicographe.js from [ce143d1120] to [3f0da2f1ff].

    [':Dn', [" déterminant négatif,", "Déterminant négatif"]],
    [':Od', [" pronom démonstratif,", "Pronom démonstratif"]],
    [':Oi', [" pronom indéfini,", "Pronom indéfini"]],
    [':On', [" pronom indéfini négatif,", "Pronom indéfini négatif"]],
    [':Ot', [" pronom interrogatif,", "Pronom interrogatif"]],
    [':Or', [" pronom relatif,", "Pronom relatif"]],
    [':Ow', [" pronom adverbial,", "Pronom adverbial"]],

    [':Os', [" pronom personnel sujet,", "Pronom personnel sujet"]],
    [':Oo', [" pronom personnel objet,", "Pronom personnel objet"]],
    [':O1', [" 1ʳᵉ pers.,", "Pronom : 1ʳᵉ personne"]],
    [':O2', [" 2ᵉ pers.,", "Pronom : 2ᵉ personne"]],
    [':O3', [" 3ᵉ pers.,", "Pronom : 3ᵉ personne"]],
    [':C', [" conjonction,", "Conjonction"]],
    [':Ĉ', [" conjonction (él.),", "Conjonction (élément)"]],
................................................................................

    ['en', " pronom adverbial"],
    ["m'en", " (me) pronom personnel objet + (en) pronom adverbial"],
    ["t'en", " (te) pronom personnel objet + (en) pronom adverbial"],
    ["s'en", " (se) pronom personnel objet + (en) pronom adverbial"]
]);

const _dSeparator = new Map([
    ['.', "point"],
    ['·', "point médian"],
    ['…', "points de suspension"],
    [':', "deux-points"],
    [';', "point-virgule"],
    [',', "virgule"],
    ['?', "point d’interrogation"],
................................................................................
    ['–', "tiret demi-cadratin"],
    ['«', "guillemet ouvrant (chevrons)"],
    ['»', "guillemet fermant (chevrons)"],
    ['“', "guillemet ouvrant double"],
    ['”', "guillemet fermant double"],
    ['‘', "guillemet ouvrant"],
    ['’', "guillemet fermant"],

    ['/', "signe de la division"],
    ['+', "signe de l’addition"],
    ['*', "signe de la multiplication"],
    ['=', "signe de l’égalité"],
    ['<', "inférieur à"],
    ['>', "supérieur à"],




]);


class Lexicographe {

    constructor (oSpellChecker, oTokenizer, oLocGraph) {
        this.oSpellChecker = oSpellChecker;
................................................................................
    getInfoForToken (oToken) {
        // Token: .sType, .sValue, .nStart, .nEnd
        // return an object {sType, sValue, aLabel}
        let m = null;
        try {
            switch (oToken.sType) {
                case 'SEPARATOR':

                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: [_dSeparator.gl_get(oToken.sValue, "caractère indéterminé")]
                    };
                    break;
                case 'NUM':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: ["nombre"]
................................................................................
                case 'LINK':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["hyperlien"]
                    };
                    break;
                case 'ELPFX':
                    let sTemp = oToken.sValue.replace("’", "").replace("'", "").replace("`", "").toLowerCase();
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: [_dElidedPrefix.gl_get(sTemp, "préfixe élidé inconnu")]
                    };







                    break;
                case 'FOLDERUNIX':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["dossier UNIX (et dérivés)"]
                    };
................................................................................
                case 'FOLDERWIN':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["dossier Windows"]
                    };
                    break;
                case 'ACRONYM':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: ["Sigle ou acronyme"]
                    };
                    break;
                case 'WORD':
................................................................................
        let sRes = "";
        sTags = sTags.replace(/V([0-3][ea]?)[itpqnmr_eaxz]+/, "V$1");
        let m;
        while ((m = this._zTag.exec(sTags)) !== null) {
            sRes += _dTag.get(m[0])[0];
        }
        if (sRes.startsWith(" verbe") && !sRes.includes("infinitif")) {
            sRes += " [" + sTags.slice(1, sTags.indexOf(" ")) + "]";
        }
        if (!sRes) {
            return "#Erreur. Étiquette inconnue : [" + sTags + "]";
        }
        return sRes.gl_trimRight(",");
    }

................................................................................
        let aTokenList = this.getListOfTokens(sText.replace("'", "’").trim(), false);
        let iKey = 0;
        let aElem = [];
        do {
            let oToken = aTokenList[iKey];
            let sMorphLoc = '';
            let aTokenTempList = [oToken];
            if (oToken.sType == "WORD" || oToken.sType == "ELPFX"){
                let iKeyTree = iKey + 1;
                let oLocNode = this.oLocGraph[oToken.sValue.toLowerCase()];
                while (oLocNode) {
                    let oTokenNext = aTokenList[iKeyTree];
                    iKeyTree++;
                    if (oTokenNext) {
                        oLocNode = oLocNode[oTokenNext.sValue.toLowerCase()];


    [':Dn', [" déterminant négatif,", "Déterminant négatif"]],
    [':Od', [" pronom démonstratif,", "Pronom démonstratif"]],
    [':Oi', [" pronom indéfini,", "Pronom indéfini"]],
    [':On', [" pronom indéfini négatif,", "Pronom indéfini négatif"]],
    [':Ot', [" pronom interrogatif,", "Pronom interrogatif"]],
    [':Or', [" pronom relatif,", "Pronom relatif"]],
    [':Ow', [" pronom adverbial,", "Pronom adverbial"]],
    [':Ov', ["", ""]],
    [':Os', [" pronom personnel sujet,", "Pronom personnel sujet"]],
    [':Oo', [" pronom personnel objet,", "Pronom personnel objet"]],
    [':O1', [" 1ʳᵉ pers.,", "Pronom : 1ʳᵉ personne"]],
    [':O2', [" 2ᵉ pers.,", "Pronom : 2ᵉ personne"]],
    [':O3', [" 3ᵉ pers.,", "Pronom : 3ᵉ personne"]],
    [':C', [" conjonction,", "Conjonction"]],
    [':Ĉ', [" conjonction (él.),", "Conjonction (élément)"]],
................................................................................

    ['en', " pronom adverbial"],
    ["m'en", " (me) pronom personnel objet + (en) pronom adverbial"],
    ["t'en", " (te) pronom personnel objet + (en) pronom adverbial"],
    ["s'en", " (se) pronom personnel objet + (en) pronom adverbial"]
]);

const _dChar = new Map([
    ['.', "point"],
    ['·', "point médian"],
    ['…', "points de suspension"],
    [':', "deux-points"],
    [';', "point-virgule"],
    [',', "virgule"],
    ['?', "point d’interrogation"],
................................................................................
    ['–', "tiret demi-cadratin"],
    ['«', "guillemet ouvrant (chevrons)"],
    ['»', "guillemet fermant (chevrons)"],
    ['“', "guillemet ouvrant double"],
    ['”', "guillemet fermant double"],
    ['‘', "guillemet ouvrant"],
    ['’', "guillemet fermant"],
    ['"', "guillemets droits (déconseillé en typographie)"],
    ['/', "signe de la division"],
    ['+', "signe de l’addition"],
    ['*', "signe de la multiplication"],
    ['=', "signe de l’égalité"],
    ['<', "inférieur à"],
    ['>', "supérieur à"],
    ['⩽', "inférieur ou égal à"],
    ['⩾', "supérieur ou égal à"],
    ['%', "signe de pourcentage"],
    ['‰', "signe pour mille"],
]);


class Lexicographe {

    constructor (oSpellChecker, oTokenizer, oLocGraph) {
        this.oSpellChecker = oSpellChecker;
................................................................................
    getInfoForToken (oToken) {
        // Token: .sType, .sValue, .nStart, .nEnd
        // return an object {sType, sValue, aLabel}
        let m = null;
        try {
            switch (oToken.sType) {
                case 'SEPARATOR':
                case 'SIGN':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: [_dChar.gl_get(oToken.sValue, "caractère indéterminé")]
                    };
                    break;
                case 'NUM':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: ["nombre"]
................................................................................
                case 'LINK':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["hyperlien"]
                    };
                    break;
                case 'WORD_ELIDED':
                    let sTemp = oToken.sValue.replace("’", "").replace("'", "").replace("`", "").toLowerCase();
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: [_dElidedPrefix.gl_get(sTemp, "préfixe élidé inconnu")]
                    };
                    break;
                case 'WORD_ORDINAL':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: ["nombre ordinal"]
                    };
                    break;
                case 'FOLDERUNIX':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["dossier UNIX (et dérivés)"]
                    };
................................................................................
                case 'FOLDERWIN':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue.slice(0, 40) + "…",
                        aLabel: ["dossier Windows"]
                    };
                    break;
                case 'WORD_ACRONYM':
                    return {
                        sType: oToken.sType,
                        sValue: oToken.sValue,
                        aLabel: ["Sigle ou acronyme"]
                    };
                    break;
                case 'WORD':
................................................................................
        let sRes = "";
        sTags = sTags.replace(/V([0-3][ea]?)[itpqnmr_eaxz]+/, "V$1");
        let m;
        while ((m = this._zTag.exec(sTags)) !== null) {
            sRes += _dTag.get(m[0])[0];
        }
        if (sRes.startsWith(" verbe") && !sRes.includes("infinitif")) {
            sRes += " [" + sTags.slice(1, sTags.indexOf("/")) + "]";
        }
        if (!sRes) {
            return "#Erreur. Étiquette inconnue : [" + sTags + "]";
        }
        return sRes.gl_trimRight(",");
    }

................................................................................
        let aTokenList = this.getListOfTokens(sText.replace("'", "’").trim(), false);
        let iKey = 0;
        let aElem = [];
        do {
            let oToken = aTokenList[iKey];
            let sMorphLoc = '';
            let aTokenTempList = [oToken];
            if (oToken.sType == "WORD" || oToken.sType == "WORD_ELIDED"){
                let iKeyTree = iKey + 1;
                let oLocNode = this.oLocGraph[oToken.sValue.toLowerCase()];
                while (oLocNode) {
                    let oTokenNext = aTokenList[iKeyTree];
                    iKeyTree++;
                    if (oTokenNext) {
                        oLocNode = oLocNode[oTokenNext.sValue.toLowerCase()];

Modified gc_lang/fr/modules/conj.py from [c668aaf269] to [258383e97f].


# Grammalecte - Conjugueur


# License: GPL 3

import re
import traceback

from .conj_data import lVtyp as _lVtyp
from .conj_data import lTags as _lTags
................................................................................
_dGroup = { "0": "auxiliaire", "1": "1ᵉʳ groupe", "2": "2ᵉ groupe", "3": "3ᵉ groupe" }

_dTenseIdx = { ":PQ": 0, ":Ip": 1, ":Iq": 2, ":Is": 3, ":If": 4, ":K": 5, ":Sp": 6, ":Sq": 7, ":E": 8 }



def isVerb (sVerb):

    return sVerb in _dVerb


def getConj (sVerb, sTense, sWho):
    "returns conjugation (can be an empty string)"
    if sVerb not in _dVerb:
        return None
................................................................................
    "returns raw informations about sVerb"
    if sVerb not in _dVerb:
        return None
    return _lVtyp[_dVerb[sVerb][0]]


def getSimil (sWord, sMorph, bSubst=False):

    if ":V" not in sMorph:
        return set()
    sInfi = sMorph[1:sMorph.find(" ")]
    aSugg = set()
    tTags = _getTags(sInfi)
    if tTags:
        if not bSubst:
            # we suggest conjugated forms
            if ":V1" in sMorph:
                aSugg.add(sInfi)
................................................................................
            # if there is only one past participle (epi inv), unreliable.
            if len(aSugg) == 1:
                aSugg.clear()
    return aSugg


def getConjSimilInfiV1 (sInfi):

    if sInfi not in _dVerb:
        return set()
    aSugg = set()
    tTags = _getTags(sInfi)
    if tTags:
        aSugg.add(_getConjWithTags(sInfi, tTags, ":Iq", ":2s"))
        aSugg.add(_getConjWithTags(sInfi, tTags, ":Iq", ":3s"))
................................................................................
    "returns sWord modified by sSfx"
    if not sSfx:
        return ""
    if sSfx == "0":
        return sWord
    try:
        return sWord[:-(ord(sSfx[0])-48)] + sSfx[1:]  if sSfx[0] != '0'  else  sWord + sSfx[1:]  # 48 is the ASCII code for "0"
    except:
        return "## erreur, code : " + str(sSfx) + " ##"
        


class Verb ():


    def __init__ (self, sVerb, sVerbPattern=""):
        # conjugate a unknown verb with rules from sVerbPattern
        if not isinstance(sVerb, str):
            raise TypeError("sVerb should be a string")
        if not sVerb:
            raise ValueError("Empty string.")

................................................................................
        self._sRawInfo = getVtyp(sVerbPattern)
        self.sInfo = self._readableInfo()
        self.bProWithEn = (self._sRawInfo[5] == "e")
        self._tTags = _getTags(sVerbPattern)
        if not self._tTags:
            raise ValueError("Unknown verb.")
        self._tTagsAux = _getTags(self.sVerbAux)
        self.cGroup = self._sRawInfo[0];
        self.dConj = {
            ":Y": {
                "label": "Infinitif",
                ":": sVerb,
            },
            ":P": {
                "label": "Participe présent",
................................................................................
                sInfo = "# erreur - code : " + self._sRawInfo
            return sGroup + " · " + sInfo
        except:
            traceback.print_exc()
            return "# erreur"

    def infinitif (self, bPro, bNeg, bTpsCo, bInt, bFem):

        try:
            if bTpsCo:
                sInfi = self.sVerbAux  if not bPro  else  "être"
            else:
                sInfi = self.sVerb
            if bPro:
                if self.bProWithEn:
................................................................................
                sInfi += " … ?"
            return sInfi
        except:
            traceback.print_exc()
            return "# erreur"

    def participePasse (self, sWho):

        try:
            return self.dConj[":Q"][sWho]
        except:
            traceback.print_exc()
            return "# erreur"

    def participePresent (self, bPro, bNeg, bTpsCo, bInt, bFem):

        try:
            if not self.dConj[":P"][":"]:
                return ""
            if bTpsCo:
                sPartPre = _getConjWithTags(self.sVerbAux, self._tTagsAux, ":PQ", ":P")  if not bPro  else  getConj("être", ":PQ", ":P")
            else:
                sPartPre = self.dConj[":P"][":"]
................................................................................
                sPartPre += " … ?"
            return sPartPre
        except:
            traceback.print_exc()
            return "# erreur"

    def conjugue (self, sTemps, sWho, bPro, bNeg, bTpsCo, bInt, bFem):

        try:
            if not self.dConj[sTemps][sWho]:
                return ""
            if not bTpsCo and bInt and sWho == ":1s" and self.dConj[sTemps].get(":1ś", False):
                sWho = ":1ś"
            if bTpsCo:
                sConj = _getConjWithTags(self.sVerbAux, self._tTagsAux, sTemps, sWho)  if not bPro  else  getConj("être", sTemps, sWho)
................................................................................
                else:
                    sConj = _dProObjEl[sWho] + "en " + sConj
            if bNeg:
                sConj = "n’" + sConj  if bEli and not bPro  else  "ne " + sConj
            if bInt:
                if sWho == ":3s" and not _zNeedTeuph.search(sConj):
                    sConj += "-t"
                sConj += "-" + self._getPronom(sWho, bFem)
            else:
                if sWho == ":1s" and bEli and not bNeg and not bPro:
                    sConj = "j’" + sConj
                else:
                    sConj = self._getPronom(sWho, bFem) + " " + sConj
            if bNeg:
                sConj += " pas"
            if bTpsCo:
                sConj += " " + self._seekPpas(bPro, bFem, sWho.endswith("p") or self._sRawInfo[5] == "r")
            if bInt:
                sConj += " … ?"
            return sConj
        except:
            traceback.print_exc()
            return "# erreur"

    def _getPronom (self, sWho, bFem):
        try:
            if sWho == ":3s":
                if self._sRawInfo[5] == "r":
                    return "on"
                elif bFem:
                    return "elle"
            elif sWho == ":3p" and bFem:
................................................................................
                return "elles"
            return _dProSuj[sWho]
        except:
            traceback.print_exc()
            return "# erreur"

    def imperatif (self, sWho, bPro, bNeg, bTpsCo, bFem):

        try:
            if not self.dConj[":E"][sWho]:
                return ""
            if bTpsCo:
                sImpe = _getConjWithTags(self.sVerbAux, self._tTagsAux, ":E", sWho)  if not bPro  else  getConj(u"être", ":E", sWho)
            else:
                sImpe = self.dConj[":E"][sWho]
"""
Grammalecte - Conjugueur
"""

# License: GPL 3

import re
import traceback

from .conj_data import lVtyp as _lVtyp
from .conj_data import lTags as _lTags
................................................................................
_dGroup = { "0": "auxiliaire", "1": "1ᵉʳ groupe", "2": "2ᵉ groupe", "3": "3ᵉ groupe" }

_dTenseIdx = { ":PQ": 0, ":Ip": 1, ":Iq": 2, ":Is": 3, ":If": 4, ":K": 5, ":Sp": 6, ":Sq": 7, ":E": 8 }



def isVerb (sVerb):
    "return True if it’s a existing verb"
    return sVerb in _dVerb


def getConj (sVerb, sTense, sWho):
    "returns conjugation (can be an empty string)"
    if sVerb not in _dVerb:
        return None
................................................................................
    "returns raw informations about sVerb"
    if sVerb not in _dVerb:
        return None
    return _lVtyp[_dVerb[sVerb][0]]


def getSimil (sWord, sMorph, bSubst=False):
    "returns a set of verbal forms similar to <sWord>, according to <sMorph>"
    if ":V" not in sMorph:
        return set()
    sInfi = sMorph[1:sMorph.find("/")]
    aSugg = set()
    tTags = _getTags(sInfi)
    if tTags:
        if not bSubst:
            # we suggest conjugated forms
            if ":V1" in sMorph:
                aSugg.add(sInfi)
................................................................................
            # if there is only one past participle (epi inv), unreliable.
            if len(aSugg) == 1:
                aSugg.clear()
    return aSugg


def getConjSimilInfiV1 (sInfi):
    "returns verbal forms phonetically similar to infinitive form (for verb in group 1)"
    if sInfi not in _dVerb:
        return set()
    aSugg = set()
    tTags = _getTags(sInfi)
    if tTags:
        aSugg.add(_getConjWithTags(sInfi, tTags, ":Iq", ":2s"))
        aSugg.add(_getConjWithTags(sInfi, tTags, ":Iq", ":3s"))
................................................................................
    "returns sWord modified by sSfx"
    if not sSfx:
        return ""
    if sSfx == "0":
        return sWord
    try:
        return sWord[:-(ord(sSfx[0])-48)] + sSfx[1:]  if sSfx[0] != '0'  else  sWord + sSfx[1:]  # 48 is the ASCII code for "0"
    except (IndexError, TypeError):
        return "## erreur, code : " + str(sSfx) + " ##"



class Verb ():
    "Verb and its conjugation"

    def __init__ (self, sVerb, sVerbPattern=""):
        # conjugate an unknown verb with rules from sVerbPattern
        if not isinstance(sVerb, str):
            raise TypeError("sVerb should be a string")
        if not sVerb:
            raise ValueError("Empty string.")

................................................................................
        self._sRawInfo = getVtyp(sVerbPattern)
        self.sInfo = self._readableInfo()
        self.bProWithEn = (self._sRawInfo[5] == "e")
        self._tTags = _getTags(sVerbPattern)
        if not self._tTags:
            raise ValueError("Unknown verb.")
        self._tTagsAux = _getTags(self.sVerbAux)
        self.cGroup = self._sRawInfo[0]
        self.dConj = {
            ":Y": {
                "label": "Infinitif",
                ":": sVerb,
            },
            ":P": {
                "label": "Participe présent",
................................................................................
                sInfo = "# erreur - code : " + self._sRawInfo
            return sGroup + " · " + sInfo
        except:
            traceback.print_exc()
            return "# erreur"

    def infinitif (self, bPro, bNeg, bTpsCo, bInt, bFem):
        "returns string (conjugaison à l’infinitif)"
        try:
            if bTpsCo:
                sInfi = self.sVerbAux  if not bPro  else  "être"
            else:
                sInfi = self.sVerb
            if bPro:
                if self.bProWithEn:
................................................................................
                sInfi += " … ?"
            return sInfi
        except:
            traceback.print_exc()
            return "# erreur"

    def participePasse (self, sWho):
        "returns past participle according to <sWho>"
        try:
            return self.dConj[":Q"][sWho]
        except:
            traceback.print_exc()
            return "# erreur"

    def participePresent (self, bPro, bNeg, bTpsCo, bInt, bFem):
        "returns string (conjugaison du participe présent)"
        try:
            if not self.dConj[":P"][":"]:
                return ""
            if bTpsCo:
                sPartPre = _getConjWithTags(self.sVerbAux, self._tTagsAux, ":PQ", ":P")  if not bPro  else  getConj("être", ":PQ", ":P")
            else:
                sPartPre = self.dConj[":P"][":"]
................................................................................
                sPartPre += " … ?"
            return sPartPre
        except:
            traceback.print_exc()
            return "# erreur"

    def conjugue (self, sTemps, sWho, bPro, bNeg, bTpsCo, bInt, bFem):
        "returns string (conjugue le verbe au temps <sTemps> pour <sWho>) "
        try:
            if not self.dConj[sTemps][sWho]:
                return ""
            if not bTpsCo and bInt and sWho == ":1s" and self.dConj[sTemps].get(":1ś", False):
                sWho = ":1ś"
            if bTpsCo:
                sConj = _getConjWithTags(self.sVerbAux, self._tTagsAux, sTemps, sWho)  if not bPro  else  getConj("être", sTemps, sWho)
................................................................................
                else:
                    sConj = _dProObjEl[sWho] + "en " + sConj
            if bNeg:
                sConj = "n’" + sConj  if bEli and not bPro  else  "ne " + sConj
            if bInt:
                if sWho == ":3s" and not _zNeedTeuph.search(sConj):
                    sConj += "-t"
                sConj += "-" + self._getPronomSujet(sWho, bFem)
            else:
                if sWho == ":1s" and bEli and not bNeg and not bPro:
                    sConj = "j’" + sConj
                else:
                    sConj = self._getPronomSujet(sWho, bFem) + " " + sConj
            if bNeg:
                sConj += " pas"
            if bTpsCo:
                sConj += " " + self._seekPpas(bPro, bFem, sWho.endswith("p") or self._sRawInfo[5] == "r")
            if bInt:
                sConj += " … ?"
            return sConj
        except:
            traceback.print_exc()
            return "# erreur"

    def _getPronomSujet (self, sWho, bFem):
        try:
            if sWho == ":3s":
                if self._sRawInfo[5] == "r":
                    return "on"
                elif bFem:
                    return "elle"
            elif sWho == ":3p" and bFem:
................................................................................
                return "elles"
            return _dProSuj[sWho]
        except:
            traceback.print_exc()
            return "# erreur"

    def imperatif (self, sWho, bPro, bNeg, bTpsCo, bFem):
        "returns string (conjugaison à l’impératif)"
        try:
            if not self.dConj[":E"][sWho]:
                return ""
            if bTpsCo:
                sImpe = _getConjWithTags(self.sVerbAux, self._tTagsAux, ":E", sWho)  if not bPro  else  getConj(u"être", ":E", sWho)
            else:
                sImpe = self.dConj[":E"][sWho]
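
Usage note (not part of the check-in): the rewritten module keeps a small public surface — isVerb(), getConj() and the Verb class above. A minimal sketch follows; the import path and the printed conjugations are assumptions based on the code and data shown here, not a documented API.

# minimal usage sketch — the import path is an assumption
from grammalecte.fr import conj

if conj.isVerb("parler"):                          # True when the verb is in _dVerb
    print(conj.getConj("parler", ":Ip", ":1s"))    # indicatif présent, 1ʳᵉ pers. sing. → “parle”
    print(conj.getConj("parler", ":If", ":3p"))    # indicatif futur, 3ᵉ pers. plur. → “parleront”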

Modified gc_lang/fr/modules/conj_generator.py from [2e696a65e3] to [ee0a228497].


# Conjugation generator
# beta stage, unfinished, the root for a new way to generate flexions…


import re


def conjugate (sVerb, sVerbTag="i_____a", bVarPpas=True):

    lConj = []
    cGroup = getVerbGroupChar(sVerb)
    for nCut, sAdd, sFlexTags, sPattern in getConjRules(sVerb, bVarPpas):
        if not sPattern or re.search(sPattern, sVerb):
            sFlexion = sVerb[0:-nCut] + sAdd  if nCut  else sVerb + sAdd
            lConj.append((sFlexion, ":V" + cGroup + "_" + sVerbTag + sFlexTags))
    return lConj


def getVerbGroupChar (sVerb, ):

    sVerb = sVerb.lower()
    if sVerb.endswith("er"):
        return "1"
    if sVerb.endswith("ir"):
        return "2"
    if sVerb == "être" or sVerb == "avoir":
        return "0"
    if sVerb.endswith("re"):
        return "3"
    return "4"


def getConjRules (sVerb, bVarPpas=True, nGroup=2):

    if sVerb.endswith("er"):
        # premier groupe, conjugaison en fonction de la terminaison du lemme
        # 5 lettres
        if sVerb[-5:] in oConj["V1"]:
            lConj = list(oConj["V1"][sVerb[-5:]])
        # 4 lettres
        elif sVerb[-4:] in oConj["V1"]:
................................................................................
        [2,     "isses",        ":Sp:Sq:2s/*",      False],
        [2,     "isse",         ":Sp:3s/*",         False],
        [2,     "ît",           ":Sq:3s/*",         False],
        [2,     "is",           ":E:2s/*",          False],
        [2,     "issons",       ":E:1p/*",          False],
        [2,     "issez",        ":E:2p/*",          False]
    ],
    
    # premier groupe (bien plus irrégulier que prétendu)
    "V1": {
        # a
        # verbes en -er, -ger, -yer, -cer
        "er": [
            [2,      "er",        ":Y/*",               False],
            [2,      "ant",       ":P/*",               False],
"""
Conjugation generator
beta stage, unfinished, the root for a new way to generate flexions…
"""

import re


def conjugate (sVerb, sVerbTag="i_____a", bVarPpas=True):
    "conjugate <sVerb> and returns a list of tuples (conjugation form, tags)"
    lConj = []
    cGroup = getVerbGroupChar(sVerb)
    for nCut, sAdd, sFlexTags, sPattern in getConjRules(sVerb, bVarPpas):
        if not sPattern or re.search(sPattern, sVerb):
            sFlexion = sVerb[0:-nCut] + sAdd  if nCut  else sVerb + sAdd
            lConj.append((sFlexion, ":V" + cGroup + "_" + sVerbTag + sFlexTags))
    return lConj


def getVerbGroupChar (sVerb):
    "returns the group number of <sVerb> guessing on its ending"
    sVerb = sVerb.lower()
    if sVerb.endswith("er"):
        return "1"
    if sVerb.endswith("ir"):
        return "2"
    if sVerb == "être" or sVerb == "avoir":
        return "0"
    if sVerb.endswith("re"):
        return "3"
    return "4"


def getConjRules (sVerb, bVarPpas=True, nGroup=2):
    "returns a list of lists to conjugate a verb, guessing on its ending"
    if sVerb.endswith("er"):
        # premier groupe, conjugaison en fonction de la terminaison du lemme
        # 5 lettres
        if sVerb[-5:] in oConj["V1"]:
            lConj = list(oConj["V1"][sVerb[-5:]])
        # 4 lettres
        elif sVerb[-4:] in oConj["V1"]:
................................................................................
        [2,     "isses",        ":Sp:Sq:2s/*",      False],
        [2,     "isse",         ":Sp:3s/*",         False],
        [2,     "ît",           ":Sq:3s/*",         False],
        [2,     "is",           ":E:2s/*",          False],
        [2,     "issons",       ":E:1p/*",          False],
        [2,     "issez",        ":E:2p/*",          False]
    ],

    # premier groupe (bien plus irrégulier que prétendu)
    "V1": {
        # a
        # verbes en -er, -ger, -yer, -cer
        "er": [
            [2,      "er",        ":Y/*",               False],
            [2,      "ant",       ":P/*",               False],
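
Usage note (not part of the check-in): conjugate() builds (flexion, tags) tuples from the rule tables above. A minimal sketch, assuming the module is imported directly; which ending-specific rule set fires depends on the verb, so the tuple shown in the comment is only an illustration taken from the “er” rules visible above.

# minimal usage sketch — direct import of the module is an assumption
from conj_generator import conjugate, getVerbGroupChar

print(getVerbGroupChar("chanter"))        # "1" — guessed from the -er ending
for sFlexion, sTags in conjugate("chanter"):
    # e.g. ("chantant", ":V1_i_____a:P/*") from the rule [2, "ant", ":P/*", False]
    print(sFlexion, sTags)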

Modified gc_lang/fr/modules/cregex.py from [a0df0d1397] to [4b9e99ff72].


# Grammalecte - Compiled regular expressions


import re

#### Lemme
Lemma = re.compile("^>(\w[\w-]*)")

#### Analyses
Gender = re.compile(":[mfe]")
Number = re.compile(":[spi]")

#### Nom et adjectif
NA = re.compile(":[NA]")
................................................................................
NPf = re.compile(":(?:M[12P]|T):f")
NPe = re.compile(":(?:M[12P]|T):e")


#### FONCTIONS

def getLemmaOfMorph (s):

    return Lemma.search(s).group(1)

def checkAgreement (l1, l2):

    # check number agreement
    if not mbInv(l1) and not mbInv(l2):
        if mbSg(l1) and not mbSg(l2):
            return False
        if mbPl(l1) and not mbPl(l2):
            return False
    # check gender agreement
................................................................................
    if mbMas(l1) and not mbMas(l2):
        return False
    if mbFem(l1) and not mbFem(l2):
        return False
    return True

def checkConjVerb (lMorph, sReqConj):

    return any(sReqConj in s  for s in lMorph)

def getGender (lMorph):
    "returns gender of word (':m', ':f', ':e' or empty string)."
    sGender = ""
    for sMorph in lMorph:
        m = Gender.search(sMorph)
................................................................................
                return ":e"
    return sGender

def getNumber (lMorph):
    "returns number of word (':s', ':p', ':i' or empty string)."
    sNumber = ""
    for sMorph in lMorph:
        m = Number.search(sWord)
        if m:
            if not sNumber:
                sNumber = m.group(0)
            elif sNumber != m.group(0):
                return ":i"
    return sNumber

# NOTE :  isWhat (lMorph)    returns True   if lMorph contains nothing else than What
#         mbWhat (lMorph)    returns True   if lMorph contains What at least once

## isXXX = it’s certain

def isNom (lMorph):

    return all(":N" in s  for s in lMorph)

def isNomNotAdj (lMorph):

    return all(NnotA.search(s)  for s in lMorph)

def isAdj (lMorph):

    return all(":A" in s  for s in lMorph)

def isNomAdj (lMorph):

    return all(NA.search(s)  for s in lMorph)

def isNomVconj (lMorph):

    return all(NVconj.search(s)  for s in lMorph)

def isInv (lMorph):

    return all(":i" in s  for s in lMorph)

def isSg (lMorph):

    return all(":s" in s  for s in lMorph)

def isPl (lMorph):

    return all(":p" in s  for s in lMorph)

def isEpi (lMorph):

    return all(":e" in s  for s in lMorph)

def isMas (lMorph):

    return all(":m" in s  for s in lMorph)

def isFem (lMorph):

    return all(":f" in s  for s in lMorph)


## mbXXX = MAYBE XXX

def mbNom (lMorph):

    return any(":N" in s  for s in lMorph)

def mbAdj (lMorph):

    return any(":A" in s  for s in lMorph)

def mbAdjNb (lMorph):

    return any(AD.search(s)  for s in lMorph)

def mbNomAdj (lMorph):

    return any(NA.search(s)  for s in lMorph)

def mbNomNotAdj (lMorph):

    b = False
    for s in lMorph:
        if ":A" in s:
            return False
        if ":N" in s:
            b = True
    return b

def mbPpasNomNotAdj (lMorph):

    return any(PNnotA.search(s)  for s in lMorph)

def mbVconj (lMorph):

    return any(Vconj.search(s)  for s in lMorph)

def mbVconj123 (lMorph):

    return any(Vconj123.search(s)  for s in lMorph)

def mbMG (lMorph):

    return any(":G" in s  for s in lMorph)

def mbInv (lMorph):

    return any(":i" in s  for s in lMorph)

def mbSg (lMorph):

    return any(":s" in s  for s in lMorph)

def mbPl (lMorph):

    return any(":p" in s  for s in lMorph)

def mbEpi (lMorph):

    return any(":e" in s  for s in lMorph)

def mbMas (lMorph):

    return any(":m" in s  for s in lMorph)

def mbFem (lMorph):

    return any(":f" in s  for s in lMorph)

def mbNpr (lMorph):

    return any(NP.search(s)  for s in lMorph)

def mbNprMasNotFem (lMorph):

    if any(NPf.search(s)  for s in lMorph):
        return False
    return any(NPm.search(s)  for s in lMorph)
"""
Grammalecte - Compiled regular expressions
"""

import re

#### Lemme
Lemma = re.compile(r"^>(\w[\w-]*)")

#### Analyses
Gender = re.compile(":[mfe]")
Number = re.compile(":[spi]")

#### Nom et adjectif
NA = re.compile(":[NA]")
................................................................................
NPf = re.compile(":(?:M[12P]|T):f")
NPe = re.compile(":(?:M[12P]|T):e")


#### FONCTIONS

def getLemmaOfMorph (s):
    "return lemma in morphology <s>"
    return Lemma.search(s).group(1)

def checkAgreement (l1, l2):
    "returns True if agreement in gender and number is possible between morphologies <l1> and <l2>"
    # check number agreement
    if not mbInv(l1) and not mbInv(l2):
        if mbSg(l1) and not mbSg(l2):
            return False
        if mbPl(l1) and not mbPl(l2):
            return False
    # check gender agreement
................................................................................
    if mbMas(l1) and not mbMas(l2):
        return False
    if mbFem(l1) and not mbFem(l2):
        return False
    return True

def checkConjVerb (lMorph, sReqConj):
    "returns True if <sReqConj> in <lMorph>"
    return any(sReqConj in s  for s in lMorph)

def getGender (lMorph):
    "returns gender of word (':m', ':f', ':e' or empty string)."
    sGender = ""
    for sMorph in lMorph:
        m = Gender.search(sMorph)
................................................................................
                return ":e"
    return sGender

def getNumber (lMorph):
    "returns number of word (':s', ':p', ':i' or empty string)."
    sNumber = ""
    for sMorph in lMorph:
        m = Number.search(sMorph)
        if m:
            if not sNumber:
                sNumber = m.group(0)
            elif sNumber != m.group(0):
                return ":i"
    return sNumber

# NOTE :  isWhat (lMorph)    returns True   if lMorph contains nothing else than What
#         mbWhat (lMorph)    returns True   if lMorph contains What at least once

## isXXX = it’s certain

def isNom (lMorph):
    "returns True if all morphologies are “nom”"
    return all(":N" in s  for s in lMorph)

def isNomNotAdj (lMorph):
    "returns True if all morphologies are “nom”, but not “adjectif”"
    return all(NnotA.search(s)  for s in lMorph)

def isAdj (lMorph):
    "returns True if all morphologies are “adjectif”"
    return all(":A" in s  for s in lMorph)

def isNomAdj (lMorph):
    "returns True if all morphologies are “nom” or “adjectif”"
    return all(NA.search(s)  for s in lMorph)

def isNomVconj (lMorph):
    "returns True if all morphologies are “nom” or “verbe conjugué”"
    return all(NVconj.search(s)  for s in lMorph)

def isInv (lMorph):
    "returns True if all morphologies are “invariable”"
    return all(":i" in s  for s in lMorph)

def isSg (lMorph):
    "returns True if all morphologies are “singulier”"
    return all(":s" in s  for s in lMorph)

def isPl (lMorph):
    "returns True if all morphologies are “pluriel”"
    return all(":p" in s  for s in lMorph)

def isEpi (lMorph):
    "returns True if all morphologies are “épicène”"
    return all(":e" in s  for s in lMorph)

def isMas (lMorph):
    "returns True if all morphologies are “masculin”"
    return all(":m" in s  for s in lMorph)

def isFem (lMorph):
    "returns True if all morphologies are “féminin”"
    return all(":f" in s  for s in lMorph)


## mbXXX = MAYBE XXX

def mbNom (lMorph):
    "returns True if one morphology is “nom”"
    return any(":N" in s  for s in lMorph)

def mbAdj (lMorph):
    "returns True if one morphology is “adjectif”"
    return any(":A" in s  for s in lMorph)

def mbAdjNb (lMorph):
    "returns True if one morphology is “adjectif” or “nombre”"
    return any(AD.search(s)  for s in lMorph)

def mbNomAdj (lMorph):
    "returns True if one morphology is “nom” or “adjectif”"
    return any(NA.search(s)  for s in lMorph)

def mbNomNotAdj (lMorph):
    "returns True if one morphology is “nom”, but not “adjectif”"
    bResult = False
    for s in lMorph:
        if ":A" in s:
            return False
        if ":N" in s:
            bResult = True
    return bResult

def mbPpasNomNotAdj (lMorph):
    "returns True if one morphology is “nom” or “participe passé”, but not “adjectif”"
    return any(PNnotA.search(s)  for s in lMorph)

def mbVconj (lMorph):
    "returns True if one morphology is “nom” or “verbe conjugué”"
    return any(Vconj.search(s)  for s in lMorph)

def mbVconj123 (lMorph):
    "returns True if one morphology is “nom” or “verbe conjugué” (but not “avoir” or “être”)"
    return any(Vconj123.search(s)  for s in lMorph)

def mbMG (lMorph):
    "returns True if one morphology is “mot grammatical”"
    return any(":G" in s  for s in lMorph)

def mbInv (lMorph):
    "returns True if one morphology is “invariable”"
    return any(":i" in s  for s in lMorph)

def mbSg (lMorph):
    "returns True if one morphology is “singulier”"
    return any(":s" in s  for s in lMorph)

def mbPl (lMorph):
    "returns True if one morphology is “pluriel”"
    return any(":p" in s  for s in lMorph)

def mbEpi (lMorph):
    "returns True if one morphology is “épicène”"
    return any(":e" in s  for s in lMorph)

def mbMas (lMorph):
    "returns True if one morphology is “masculin”"
    return any(":m" in s  for s in lMorph)

def mbFem (lMorph):
    "returns True if one morphology is “féminin”"
    return any(":f" in s  for s in lMorph)

def mbNpr (lMorph):
    "returns True if one morphology is “nom propre” or “titre de civilité”"
    return any(NP.search(s)  for s in lMorph)

def mbNprMasNotFem (lMorph):
    "returns True if one morphology is “nom propre masculin” but not “féminin”"
    if any(NPf.search(s)  for s in lMorph):
        return False
    return any(NPm.search(s)  for s in lMorph)
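
Usage note (not part of the check-in): every predicate above takes a list of morphology strings (lemma after “>”, then tags such as :N, :A, :m, :s). A minimal sketch; the example strings and the import path are assumptions about the tag layout (the import mirrors the plugin’s own “from . import cregex as cr”), not values taken from the dictionary.

# minimal usage sketch — hypothetical morphology strings, import path assumed
from grammalecte.fr import cregex as cr

lMorph = [">petit/:A:m:s", ">petit/:N:m:s"]
print(cr.getLemmaOfMorph(lMorph[0]))   # "petit" — captured by the Lemma regex
print(cr.mbNomAdj(lMorph))             # True — at least one analysis matches :[NA]
print(cr.isMas(lMorph))                # True — every analysis carries ":m"
print(cr.getGender(lMorph))            # ":m"
print(cr.getNumber(lMorph))            # ":s"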

Modified gc_lang/fr/modules/gce_analyseur.py from [39975de0ac] to [252fe3713f].

#### GRAMMAR CHECKING ENGINE PLUGIN: Parsing functions for French language

from . import cregex as cr









def rewriteSubject (s1, s2):
    # s1 is supposed to be prn/patr/npr (M[12P])

    if s2 == "lui":
        return "ils"
    if s2 == "moi":
        return "nous"
    if s2 == "toi":
        return "vous"
    if s2 == "nous":
        return "nous"
    if s2 == "vous":
        return "vous"
    if s2 == "eux":
        return "ils"
    if s2 == "elle" or s2 == "elles":
        # We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
        if cr.mbNprMasNotFem(_dAnalyses.get(s1, False)):
            return "ils"
        # si épicène, indéterminable, mais OSEF, le féminin l’emporte
        return "elles"
    return s1 + " et " + s2


def apposition (sWord1, sWord2):
    "returns True if nom + nom (no agreement required)"
    # We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    return cr.mbNomNotAdj(_dAnalyses.get(sWord2, False)) and cr.mbPpasNomNotAdj(_dAnalyses.get(sWord1, False))


def isAmbiguousNAV (sWord):
    "words which are nom|adj and verb are ambiguous (except être and avoir)"
    if sWord not in _dAnalyses and not _storeMorphFromFSA(sWord):
        return False
    if not cr.mbNomAdj(_dAnalyses[sWord]) or sWord == "est":
        return False
    if cr.mbVconj(_dAnalyses[sWord]) and not cr.mbMG(_dAnalyses[sWord]):

        return True
    return False


def isAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj):
    "use it if sWord1 won’t be a verb; word2 is assumed to be True via isAmbiguousNAV"
    # We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    a2 = _dAnalyses.get(sWord2, None)
    if not a2:
        return False
    if cr.checkConjVerb(a2, sReqMorphConj):
        # verb word2 is ok
        return False
    a1 = _dAnalyses.get(sWord1, None)
    if not a1:
        return False
    if cr.checkAgreement(a1, a2) and (cr.mbAdj(a2) or cr.mbAdj(a1)):
        return False
    return True


def isVeryAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj, bLastHopeCond):
    "use it if sWord1 can be also a verb; word2 is assumed to be True via isAmbiguousNAV"
    # We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    a2 = _dAnalyses.get(sWord2, None)
    if not a2:
        return False
    if cr.checkConjVerb(a2, sReqMorphConj):
        # verb word2 is ok
        return False
    a1 = _dAnalyses.get(sWord1, None)
    if not a1:
        return False
    if cr.checkAgreement(a1, a2) and (cr.mbAdj(a2) or cr.mbAdjNb(a1)):
        return False
    # now, we know there no agreement, and conjugation is also wrong
    if cr.isNomAdj(a1):
        return True
................................................................................
    #if cr.isNomAdjVerb(a1): # considered True
    if bLastHopeCond:
        return True
    return False


def checkAgreement (sWord1, sWord2):
    # We don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    a2 = _dAnalyses.get(sWord2, None)
    if not a2:
        return True
    a1 = _dAnalyses.get(sWord1, None)
    if not a1:
        return True
    return cr.checkAgreement(a1, a2)


_zUnitSpecial = re.compile("[µ/⁰¹²³⁴⁵⁶⁷⁸⁹Ωℓ·]")
_zUnitNumbers = re.compile("[0-9]")

def mbUnit (s):

    if _zUnitSpecial.search(s):
        return True
    if 1 < len(s) < 16 and s[0:1].islower() and (not s[1:].islower() or _zUnitNumbers.search(s)):
        return True
    return False


#### Syntagmes

_zEndOfNG1 = re.compile(" *$| +(?:, +|)(?:n(?:’|e |o(?:u?s|tre) )|l(?:’|e(?:urs?|s|) |a )|j(?:’|e )|m(?:’|es? |a |on )|t(?:’|es? |a |u )|s(?:’|es? |a )|c(?:’|e(?:t|tte|s|) )|ç(?:a |’)|ils? |vo(?:u?s|tre) )")
_zEndOfNG2 = re.compile(r" +(\w[\w-]+)")
_zEndOfNG3 = re.compile(r" *, +(\w[\w-]+)")

def isEndOfNG (dDA, s, iOffset):
    if _zEndOfNG1.match(s):
        return True
    m = _zEndOfNG2.match(s)
    if m and morphex(dDA, (iOffset+m.start(1), m.group(1)), ":[VR]", ":[NAQP]"):
        return True
    m = _zEndOfNG3.match(s)
    if m and not morph(dDA, (iOffset+m.start(1), m.group(1)), ":[NA]", False):
        return True
    return False


_zNextIsNotCOD1 = re.compile(" *,")
_zNextIsNotCOD2 = re.compile(" +(?:[mtsnj](e +|’)|[nv]ous |tu |ils? |elles? )")
_zNextIsNotCOD3 = re.compile(r" +([a-zéèî][\w-]+)")

def isNextNotCOD (dDA, s, iOffset):
    if _zNextIsNotCOD1.match(s) or _zNextIsNotCOD2.match(s):
        return True
    m = _zNextIsNotCOD3.match(s)
    if m and morphex(dDA, (iOffset+m.start(1), m.group(1)), ":[123][sp]", ":[DM]"):
        return True
    return False


_zNextIsVerb1 = re.compile(" +[nmts](?:e |’)")
_zNextIsVerb2 = re.compile(r" +(\w[\w-]+)")

def isNextVerb (dDA, s, iOffset):
    if _zNextIsVerb1.match(s):
        return True
    m = _zNextIsVerb2.match(s)
    if m and morph(dDA, (iOffset+m.start(1), m.group(1)), ":[123][sp]", False):
        return True
    return False


#### Exceptions

aREGULARPLURAL = frozenset(["abricot", "amarante", "aubergine", "acajou", "anthracite", "brique", "caca", "café", \
                            "carotte", "cerise", "chataigne", "corail", "citron", "crème", "grave", "groseille", \
                            "jonquille", "marron", "olive", "pervenche", "prune", "sable"])
aSHOULDBEVERB = frozenset(["aller", "manger"]) 





#### GRAMMAR CHECKING ENGINE PLUGIN: Parsing functions for French language

from . import cregex as cr


def g_morphVC (dToken, sPattern, sNegPattern=""):
    nEnd = dToken["sValue"].rfind("-")
    if "-t-" in dToken["sValue"]:
        nEnd = nEnd - 2
    return g_morph(dToken, sPattern, sNegPattern, 0, nEnd, False)


def rewriteSubject (s1, s2):
    "rewrite complex subject: <s1> is a prn/patr/npr (M[12P]) followed by “et” and <s2>"
    if s2 == "lui":
        return "ils"
    if s2 == "moi":
        return "nous"
    if s2 == "toi":
        return "vous"
    if s2 == "nous":
        return "nous"
    if s2 == "vous":
        return "vous"
    if s2 == "eux":
        return "ils"
    if s2 == "elle" or s2 == "elles":

        if cr.mbNprMasNotFem(_oSpellChecker.getMorph(s1)):
            return "ils"
        # si épicène, indéterminable, mais OSEF, le féminin l’emporte
        return "elles"
    return s1 + " et " + s2


def apposition (sWord1, sWord2):
    "returns True if nom + nom (no agreement required)"
    return len(sWord2) < 2 or (cr.mbNomNotAdj(_oSpellChecker.getMorph(sWord2)) and cr.mbPpasNomNotAdj(_oSpellChecker.getMorph(sWord1)))



def isAmbiguousNAV (sWord):
    "words which are nom|adj and verb are ambiguous (except être and avoir)"
    lMorph = _oSpellChecker.getMorph(sWord)

    if not cr.mbNomAdj(lMorph) or sWord == "est":
        return False

    if cr.mbVconj(lMorph) and not cr.mbMG(lMorph):
        return True
    return False


def isAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj):
    "use it if <sWord1> won’t be a verb; <sWord2> is assumed to be True via isAmbiguousNAV"
    a2 = _oSpellChecker.getMorph(sWord2)

    if not a2:
        return False
    if cr.checkConjVerb(a2, sReqMorphConj):
        # verb word2 is ok
        return False
    a1 = _oSpellChecker.getMorph(sWord1)
    if not a1:
        return False
    if cr.checkAgreement(a1, a2) and (cr.mbAdj(a2) or cr.mbAdj(a1)):
        return False
    return True


def isVeryAmbiguousAndWrong (sWord1, sWord2, sReqMorphNA, sReqMorphConj, bLastHopeCond):
    "use it if <sWord1> can be also a verb; <sWord2> is assumed to be True via isAmbiguousNAV"
    a2 = _oSpellChecker.getMorph(sWord2)

    if not a2:
        return False
    if cr.checkConjVerb(a2, sReqMorphConj):
        # verb word2 is ok
        return False
    a1 = _oSpellChecker.getMorph(sWord1)
    if not a1:
        return False
    if cr.checkAgreement(a1, a2) and (cr.mbAdj(a2) or cr.mbAdjNb(a1)):
        return False
    # now, we know there is no agreement, and conjugation is also wrong
    if cr.isNomAdj(a1):
        return True
................................................................................
    #if cr.isNomAdjVerb(a1): # considered True
    if bLastHopeCond:
        return True
    return False


def checkAgreement (sWord1, sWord2):
    "check agreement between <sWord1> and <sWord1>"
    a2 = _oSpellChecker.getMorph(sWord2)
    if not a2:
        return True
    a1 = _oSpellChecker.getMorph(sWord1)
    if not a1:
        return True
    return cr.checkAgreement(a1, a2)


_zUnitSpecial = re.compile("[µ/⁰¹²³⁴⁵⁶⁷⁸⁹Ωℓ·]")
_zUnitNumbers = re.compile("[0-9]")

def mbUnit (s):
    "returns True it can be a measurement unit"
    if _zUnitSpecial.search(s):
        return True
    if 1 < len(s) < 16 and s[0:1].islower() and (not s[1:].islower() or _zUnitNumbers.search(s)):
        return True
    return False


#### Exceptions

aREGULARPLURAL = frozenset(["abricot", "amarante", "aubergine", "acajou", "anthracite", "brique", "caca", "café", \
                            "carotte", "cerise", "chataigne", "corail", "citron", "crème", "grave", "groseille", \
                            "jonquille", "marron", "olive", "pervenche", "prune", "sable"])
aSHOULDBEVERB = frozenset(["aller", "manger"])
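
Usage note (not part of the check-in): per the plugin header above, these functions run inside the generated grammar-checking engine, so the sketch below assumes they are already in scope and only exercises the branches that need no engine state (the “elle/elles” branch of rewriteSubject needs _oSpellChecker).

# minimal usage sketch — assumes the plugin functions above are in scope (engine context)
print(rewriteSubject("Paul", "moi"))    # "nous"
print(rewriteSubject("Paul", "toi"))    # "vous"
print(rewriteSubject("Paul", "eux"))    # "ils"
print(rewriteSubject("Paul", "Marie"))  # "Paul et Marie" — plain coordination
print(mbUnit("µm"))     # True — matches the special-character class
print(mbUnit("kWh"))    # True — lowercase initial, not all lowercase afterwards
print(mbUnit("km"))     # False — all lowercase and no digit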

Modified gc_lang/fr/modules/gce_date_verif.py from [1265100649] to [c3f0a9eb2f].

_lDay = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"]
_dMonth = { "janvier":1, "février":2, "mars":3, "avril":4, "mai":5, "juin":6, "juillet":7, "août":8, "aout":8, "septembre":9, "octobre":10, "novembre":11, "décembre":12 }

import datetime


def checkDate (day, month, year):
    "to use if month is a number"
    try:
        return datetime.date(int(year), int(month), int(day))
    except ValueError:
        return False
    except:
        return True


def checkDateWithString (day, month, year):
    "to use if month is a noun"
    try:
        return datetime.date(int(year), _dMonth.get(month.lower(), ""), int(day))
    except ValueError:
        return False
    except:
        return True


def checkDay (weekday, day, month, year):
    "to use if month is a number"
    oDate = checkDate(day, month, year)
    if oDate and _lDay[oDate.weekday()] != weekday.lower():
        return False
    return True


def checkDayWithString (weekday, day, month, year):
    "to use if month is a noun"
    oDate = checkDate(day, _dMonth.get(month, ""), year)

    if oDate and _lDay[oDate.weekday()] != weekday.lower():
        return False
    return True


def getDay (day, month, year):
    "to use if month is a number"
    return _lDay[datetime.date(int(year), int(month), int(day)).weekday()]


def getDayWithString (day, month, year):
    "to use if month is a noun"
    return _lDay[datetime.date(int(year), _dMonth.get(month.lower(), ""), int(day)).weekday()]







_lDay = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"]
_dMonth = { "janvier":1, "février":2, "mars":3, "avril":4, "mai":5, "juin":6, "juillet":7, "août":8, "aout":8, "septembre":9, "octobre":10, "novembre":11, "décembre":12 }

import datetime


def checkDate (sDay, sMonth, sYear):
    "to use if <sMonth> is a number"
    try:
        return datetime.date(int(sYear), int(sMonth), int(sDay))
    except ValueError:
        return False
    except:
        return True


def checkDateWithString (sDay, sMonth, sYear):
    "to use if <sMonth> is a noun"
    try:
        return datetime.date(int(sYear), _dMonth.get(sMonth.lower(), ""), int(sDay))
    except ValueError:
        return False
    except:
        return True


def checkDay (sWeekday, sDay, sMonth, sYear):
    "to use if <sMonth> is a number"
    oDate = checkDate(sDay, sMonth, sYear)
    if oDate and _lDay[oDate.weekday()] != sWeekday.lower():
        return False
    return True


def checkDayWithString (sWeekday, sDay, sMonth, sYear):
    "to use if <sMonth> is a noun"

    oDate = checkDate(sDay, _dMonth.get(sMonth, ""), sYear)
    if oDate and _lDay[oDate.weekday()] != sWeekday.lower():
        return False
    return True


def getDay (sDay, sMonth, sYear):
    "to use if <sMonth> is a number"
    return _lDay[datetime.date(int(sYear), int(sMonth), int(sDay)).weekday()]


def getDayWithString (sDay, sMonth, sYear):
    "to use if <sMonth> is a noun"
    return _lDay[datetime.date(int(sYear), _dMonth.get(sMonth.lower(), ""), int(sDay)).weekday()]
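
Usage note (not part of the check-in): quick sanity checks for the date helpers, again assuming the plugin functions above are in scope. The dates are arbitrary examples; 2018-09-19 happens to be a Wednesday.

# minimal usage sketch — assumes the plugin functions above are in scope
print(checkDate("29", "2", "2018"))                    # False — 2018 is not a leap year
print(bool(checkDate("19", "9", "2018")))              # True — a datetime.date is returned
print(checkDay("mercredi", "19", "9", "2018"))         # True — the weekday name matches
print(checkDay("mardi", "19", "9", "2018"))            # False — wrong weekday name
print(getDay("19", "9", "2018"))                       # "mercredi"
print(checkDateWithString("19", "septembre", "2018"))  # truthy — valid date with month as a noun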

Modified gc_lang/fr/modules/gce_suggestions.py from [79835965e4] to [00833488a6].

from . import conj
from . import mfsp
from . import phonet


## Verbs













def suggVerb (sFlex, sWho, funcSugg2=None):



    aSugg = set()
    for sStem in stem(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            # we get the tense
            aTense = set()
            for sMorph in _dAnalyses.get(sFlex, []): # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
                for m in re.finditer(">"+sStem+" .*?(:(?:Y|I[pqsf]|S[pq]|K|P))", sMorph):
                    # stem must be used in regex to prevent confusion between different verbs (e.g. sauras has 2 stems: savoir and saurer)
                    if m:
                        if m.group(1) == ":Y":
                            aTense.add(":Ip")
                            aTense.add(":Iq")
                            aTense.add(":Is")
                        elif m.group(1) == ":P":
................................................................................
                if conj._hasConjWithTags(tTags, sTense, sWho):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho))
    if funcSugg2:
        aSugg2 = funcSugg2(sFlex)
        if aSugg2:
            aSugg.add(aSugg2)
    if aSugg:


        return "|".join(aSugg)
    return ""


def suggVerbPpas (sFlex, sWhat=None):

    aSugg = set()
    for sStem in stem(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            if not sWhat:
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                aSugg.discard("")
            elif sWhat == ":m:s":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sWhat == ":m:p":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q2"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sWhat == ":f:s":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q3"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sWhat == ":f:p":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q4"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sWhat == ":s":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                aSugg.discard("")
            elif sWhat == ":p":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                aSugg.discard("")
            else:
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggVerbTense (sFlex, sTense, sWho):

    aSugg = set()
    for sStem in stem(sFlex):
        if conj.hasConj(sStem, sTense, sWho):
            aSugg.add(conj.getConj(sStem, sTense, sWho))
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggVerbImpe (sFlex):



    aSugg = set()
    for sStem in stem(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            if conj._hasConjWithTags(tTags, ":E", ":2s"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2s"))
            if conj._hasConjWithTags(tTags, ":E", ":1p"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":1p"))
            if conj._hasConjWithTags(tTags, ":E", ":2p"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2p"))
    if aSugg:


        return "|".join(aSugg)
    return ""


def suggVerbInfi (sFlex):

    return "|".join([ sStem  for sStem in stem(sFlex)  if conj.isVerb(sStem) ])


_dQuiEst = { "je": ":1s", "j’": ":1s", "j’en": ":1s", "j’y": ":1s", \
             "tu": ":2s", "il": ":3s", "on": ":3s", "elle": ":3s", "nous": ":1p", "vous": ":2p", "ils": ":3p", "elles": ":3p" }
_lIndicatif = [":Ip", ":Iq", ":Is", ":If"]
_lSubjonctif = [":Sp", ":Sq"]

def suggVerbMode (sFlex, cMode, sSuj):

    if cMode == ":I":
        lMode = _lIndicatif
    elif cMode == ":S":
        lMode = _lSubjonctif
    elif cMode.startswith((":I", ":S")):
        lMode = [cMode]
    else:
................................................................................
        return ""
    sWho = _dQuiEst.get(sSuj.lower(), None)
    if not sWho:
        if sSuj[0:1].islower(): # pas un pronom, ni un nom propre
            return ""
        sWho = ":3s"
    aSugg = set()
    for sStem in stem(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            for sTense in lMode:
                if conj._hasConjWithTags(tTags, sTense, sWho):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho))
    if aSugg:
        return "|".join(aSugg)
................................................................................


## Nouns and adjectives

def suggPlur (sFlex, sWordToAgree=None):
    "returns plural forms assuming sFlex is singular"
    if sWordToAgree:
        if sWordToAgree not in _dAnalyses and not _storeMorphFromFSA(sWordToAgree):

            return ""
        sGender = cr.getGender(_dAnalyses.get(sWordToAgree, []))
        if sGender == ":m":
            return suggMasPlur(sFlex)
        elif sGender == ":f":
            return suggFemPlur(sFlex)
    aSugg = set()
    if "-" not in sFlex:
        if sFlex.endswith("l"):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggMasSing (sFlex, bSuggSimil=False):
    "returns masculine singular forms"
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    for sMorph in _dAnalyses.get(sFlex, []):

        if not ":V" in sMorph:
            # not a verb
            if ":m" in sMorph or ":e" in sMorph:
                aSugg.add(suggSing(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggMasPlur (sFlex, bSuggSimil=False):
    "returns masculine plural forms"
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    for sMorph in _dAnalyses.get(sFlex, []):

        if not ":V" in sMorph:
            # not a verb
            if ":m" in sMorph or ":e" in sMorph:
                aSugg.add(suggPlur(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggFemSing (sFlex, bSuggSimil=False):
    "returns feminine singular forms"
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    for sMorph in _dAnalyses.get(sFlex, []):

        if not ":V" in sMorph:
            # not a verb
            if ":f" in sMorph or ":e" in sMorph:
                aSugg.add(suggSing(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggFemPlur (sFlex, bSuggSimil=False):
    "returns feminine plural forms"
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    for sMorph in _dAnalyses.get(sFlex, []):

        if not ":V" in sMorph:
            # not a verb
            if ":f" in sMorph or ":e" in sMorph:
                aSugg.add(suggPlur(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
            aSugg.add(e)
    if aSugg:
        return "|".join(aSugg)
    return ""


def hasFemForm (sFlex):

    for sStem in stem(sFlex):
        if mfsp.isFemForm(sStem) or conj.hasConj(sStem, ":PQ", ":Q3"):
            return True
    if phonet.hasSimil(sFlex, ":f"):
        return True
    return False


def hasMasForm (sFlex):

    for sStem in stem(sFlex):
        if mfsp.isFemForm(sStem) or conj.hasConj(sStem, ":PQ", ":Q1"):
            # what has a feminine form also has a masculine form
            return True
    if phonet.hasSimil(sFlex, ":m"):
        return True
    return False


def switchGender (sFlex, bPlur=None):
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    if bPlur == None:
        for sMorph in _dAnalyses.get(sFlex, []):
            if ":f" in sMorph:
                if ":s" in sMorph:
                    aSugg.add(suggMasSing(sFlex))
                elif ":p" in sMorph:
                    aSugg.add(suggMasPlur(sFlex))
            elif ":m" in sMorph:
                if ":s" in sMorph:
................................................................................
                    aSugg.add(suggFemSing(sFlex))
                elif ":p" in sMorph:
                    aSugg.add(suggFemPlur(sFlex))
                else:
                    aSugg.add(suggFemSing(sFlex))
                    aSugg.add(suggFemPlur(sFlex))
    elif bPlur:
        for sMorph in _dAnalyses.get(sFlex, []):
            if ":f" in sMorph:
                aSugg.add(suggMasPlur(sFlex))
            elif ":m" in sMorph:
                aSugg.add(suggFemPlur(sFlex))
    else:
        for sMorph in _dAnalyses.get(sFlex, []):
            if ":f" in sMorph:
                aSugg.add(suggMasSing(sFlex))
            elif ":m" in sMorph:
                aSugg.add(suggFemSing(sFlex))
    if aSugg:
        return "|".join(aSugg)
    return ""


def switchPlural (sFlex):
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    aSugg = set()
    for sMorph in _dAnalyses.get(sFlex, []):
        if ":s" in sMorph:
            aSugg.add(suggPlur(sFlex))
        elif ":p" in sMorph:
            aSugg.add(suggSing(sFlex))
    if aSugg:
        return "|".join(aSugg)
    return ""


def hasSimil (sWord, sPattern=None):

    return phonet.hasSimil(sWord, sPattern)


def suggSimil (sWord, sPattern=None, bSubst=False):
    "return list of words phonetically similar to sWord and whom POS is matching sPattern"
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before

    aSugg = phonet.selectSimil(sWord, sPattern)
    for sMorph in _dAnalyses.get(sWord, []):

        aSugg.update(conj.getSimil(sWord, sMorph, bSubst))
        break
    if aSugg:


        return "|".join(aSugg)
    return ""


def suggCeOrCet (sWord):

    if re.match("(?i)[aeéèêiouyâîï]", sWord):
        return "cet"
    if sWord[0:1] == "h" or sWord[0:1] == "H":
        return "ce|cet"
    return "ce"


def suggLesLa (sWord):
    # we don’t check if word exists in _dAnalyses, for it is assumed it has been done before
    if any( ":p" in sMorph  for sMorph in _dAnalyses.get(sWord, []) ):
        return "les|la"
    return "la"


_zBinary = re.compile("^[01]+$")

def formatNumber (s):

    nLen = len(s)
    if nLen < 4:
        return s
    sRes = ""
    # nombre ordinaire
    nEnd = nLen
    while nEnd > 0:
................................................................................
    elif nLen == 9 and s.startswith("0"):
        sRes += "|" + s[0:3] + " " + s[3:5] + " " + s[5:7] + " " + s[7:9]                   # fixe belge 1
        sRes += "|" + s[0:2] + " " + s[2:5] + " " + s[5:7] + " " + s[7:9]                   # fixe belge 2
    return sRes


def formatNF (s):

    try:
        m = re.match("NF[  -]?(C|E|P|Q|S|X|Z|EN(?:[  -]ISO|))[  -]?([0-9]+(?:[/‑-][0-9]+|))", s)
        if not m:
            return ""
        return "NF " + m.group(1).upper().replace(" ", " ").replace("-", " ") + " " + m.group(2).replace("/", "‑").replace("-", "‑")
    except:
        traceback.print_exc()
        return "# erreur #"


def undoLigature (c):

    if c == "fi":
        return "fi"
    elif c == "fl":
        return "fl"
    elif c == "ff":
        return "ff"
    elif c == "ffi":
................................................................................


_xNormalizedCharsForInclusiveWriting = str.maketrans({
    '(': '_',  ')': '_',
    '.': '_',  '·': '_',
    '–': '_',  '—': '_',
    '/': '_'
 })


def normalizeInclusiveWriting (sToken):

    return sToken.translate(_xNormalizedCharsForInclusiveWriting)







from . import conj
from . import mfsp
from . import phonet


## Verbs

def splitVerb (sVerb):
    "renvoie le verbe et les pronoms séparément"
    iRight = sVerb.rfind("-")
    sSuffix = sVerb[iRight:]
    sVerb = sVerb[:iRight]
    if sVerb.endswith(("-t", "-le", "-la", "-les")):
        iRight = sVerb.rfind("-")
        sSuffix = sVerb[iRight:] + sSuffix
        sVerb = sVerb[:iRight]
    return sVerb, sSuffix
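
A quick usage sketch for splitVerb(), illustrative only and meant to be run in this module's context; the example words are arbitrary:

    # the pronoun suffix is detached from the conjugated form, including double clitics
    sVerb, sSuffix = splitVerb("donne-le-moi")
    assert (sVerb, sSuffix) == ("donne", "-le-moi")
    assert splitVerb("vas-y") == ("vas", "-y")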


def suggVerb (sFlex, sWho, funcSugg2=None, bVC=False):
    "change <sFlex> conjugation according to <sWho>"
    if bVC:
        sFlex, sSfx = splitVerb(sFlex)
    aSugg = set()
    for sStem in _oSpellChecker.getLemma(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            # we get the tense
            aTense = set()
            for sMorph in _oSpellChecker.getMorph(sFlex):
                for m in re.finditer(">"+sStem+"/.*?(:(?:Y|I[pqsf]|S[pq]|K|P))", sMorph):
                    # stem must be used in regex to prevent confusion between different verbs (e.g. sauras has 2 stems: savoir and saurer)
                    if m:
                        if m.group(1) == ":Y":
                            aTense.add(":Ip")
                            aTense.add(":Iq")
                            aTense.add(":Is")
                        elif m.group(1) == ":P":
................................................................................
                if conj._hasConjWithTags(tTags, sTense, sWho):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho))
    if funcSugg2:
        aSugg2 = funcSugg2(sFlex)
        if aSugg2:
            aSugg.add(aSugg2)
    if aSugg:
        if bVC:
            aSugg = list(map(lambda sSug: sSug + sSfx, aSugg))
        return "|".join(aSugg)
    return ""


def suggVerbPpas (sFlex, sPattern=None):
    "suggest past participles for <sFlex>"
    aSugg = set()
    for sStem in _oSpellChecker.getLemma(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            if not sPattern:
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                aSugg.discard("")
            elif sPattern == ":m:s":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sPattern == ":m:p":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q2"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sPattern == ":f:s":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q3"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sPattern == ":f:p":
                if conj._hasConjWithTags(tTags, ":PQ", ":Q4"):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                else:
                    aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
            elif sPattern == ":s":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q3"))
                aSugg.discard("")
            elif sPattern == ":p":
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q2"))
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q4"))
                aSugg.discard("")
            else:
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":PQ", ":Q1"))
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggVerbTense (sFlex, sTense, sWho):
    "change <sFlex> to a verb according to <sTense> and <sWho>"
    aSugg = set()
    for sStem in _oSpellChecker.getLemma(sFlex):
        if conj.hasConj(sStem, sTense, sWho):
            aSugg.add(conj.getConj(sStem, sTense, sWho))
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggVerbImpe (sFlex, bVC=False):
    "change <sFlex> to a verb at imperative form"
    if bVC:
        sFlex, sSfx = splitVerb(sFlex)
    aSugg = set()
    for sStem in _oSpellChecker.getLemma(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            if conj._hasConjWithTags(tTags, ":E", ":2s"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2s"))
            if conj._hasConjWithTags(tTags, ":E", ":1p"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":1p"))
            if conj._hasConjWithTags(tTags, ":E", ":2p"):
                aSugg.add(conj._getConjWithTags(sStem, tTags, ":E", ":2p"))
    if aSugg:
        if bVC:
            aSugg = list(map(lambda sSug: sSug + sSfx, aSugg))
        return "|".join(aSugg)
    return ""


def suggVerbInfi (sFlex):
    "returns infinitive forms of <sFlex>"
    return "|".join([ sStem  for sStem in _oSpellChecker.getLemma(sFlex)  if conj.isVerb(sStem) ])


_dQuiEst = { "je": ":1s", "j’": ":1s", "j’en": ":1s", "j’y": ":1s", \
             "tu": ":2s", "il": ":3s", "on": ":3s", "elle": ":3s", "nous": ":1p", "vous": ":2p", "ils": ":3p", "elles": ":3p" }
_lIndicatif = [":Ip", ":Iq", ":Is", ":If"]
_lSubjonctif = [":Sp", ":Sq"]

def suggVerbMode (sFlex, cMode, sSuj):
    "returns other conjugations of <sFlex> acconding to <cMode> and <sSuj>"
    if cMode == ":I":
        lMode = _lIndicatif
    elif cMode == ":S":
        lMode = _lSubjonctif
    elif cMode.startswith((":I", ":S")):
        lMode = [cMode]
    else:
................................................................................
        return ""
    sWho = _dQuiEst.get(sSuj.lower(), None)
    if not sWho:
        if sSuj[0:1].islower(): # pas un pronom, ni un nom propre
            return ""
        sWho = ":3s"
    aSugg = set()
    for sStem in _oSpellChecker.getLemma(sFlex):
        tTags = conj._getTags(sStem)
        if tTags:
            for sTense in lMode:
                if conj._hasConjWithTags(tTags, sTense, sWho):
                    aSugg.add(conj._getConjWithTags(sStem, tTags, sTense, sWho))
    if aSugg:
        return "|".join(aSugg)
................................................................................


## Nouns and adjectives

def suggPlur (sFlex, sWordToAgree=None):
    "returns plural forms assuming sFlex is singular"
    if sWordToAgree:
        lMorph = _oSpellChecker.getMorph(sFlex)
        if not lMorph:
            return ""
        sGender = cr.getGender(lMorph)
        if sGender == ":m":
            return suggMasPlur(sFlex)
        elif sGender == ":f":
            return suggFemPlur(sFlex)
    aSugg = set()
    if "-" not in sFlex:
        if sFlex.endswith("l"):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggMasSing (sFlex, bSuggSimil=False):
    "returns masculine singular forms"

    aSugg = set()

    for sMorph in _oSpellChecker.getMorph(sFlex):
        if not ":V" in sMorph:
            # not a verb
            if ":m" in sMorph or ":e" in sMorph:
                aSugg.add(suggSing(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggMasPlur (sFlex, bSuggSimil=False):
    "returns masculine plural forms"

    aSugg = set()

    for sMorph in _oSpellChecker.getMorph(sFlex):
        if not ":V" in sMorph:
            # not a verb
            if ":m" in sMorph or ":e" in sMorph:
                aSugg.add(suggPlur(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggFemSing (sFlex, bSuggSimil=False):
    "returns feminine singular forms"

    aSugg = set()

    for sMorph in _oSpellChecker.getMorph(sFlex):
        if not ":V" in sMorph:
            # not a verb
            if ":f" in sMorph or ":e" in sMorph:
                aSugg.add(suggSing(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
    if aSugg:
        return "|".join(aSugg)
    return ""


def suggFemPlur (sFlex, bSuggSimil=False):
    "returns feminine plural forms"

    aSugg = set()

    for sMorph in _oSpellChecker.getMorph(sFlex):
        if not ":V" in sMorph:
            # not a verb
            if ":f" in sMorph or ":e" in sMorph:
                aSugg.add(suggPlur(sFlex))
            else:
                sStem = cr.getLemmaOfMorph(sMorph)
                if mfsp.isFemForm(sStem):
................................................................................
            aSugg.add(e)
    if aSugg:
        return "|".join(aSugg)
    return ""


def hasFemForm (sFlex):
    "return True if there is a feminine form of <sFlex>"
    for sStem in _oSpellChecker.getLemma(sFlex):
        if mfsp.isFemForm(sStem) or conj.hasConj(sStem, ":PQ", ":Q3"):
            return True
    if phonet.hasSimil(sFlex, ":f"):
        return True
    return False


def hasMasForm (sFlex):
    "return True if there is a masculine form of <sFlex>"
    for sStem in _oSpellChecker.getLemma(sFlex):
        if mfsp.isFemForm(sStem) or conj.hasConj(sStem, ":PQ", ":Q1"):
            # what has a feminine form also has a masculine form
            return True
    if phonet.hasSimil(sFlex, ":m"):
        return True
    return False


def switchGender (sFlex, bPlur=None):
    "return feminine or masculine form(s) of <sFlex>"
    aSugg = set()
    if bPlur == None:
        for sMorph in _oSpellChecker.getMorph(sFlex):
            if ":f" in sMorph:
                if ":s" in sMorph:
                    aSugg.add(suggMasSing(sFlex))
                elif ":p" in sMorph:
                    aSugg.add(suggMasPlur(sFlex))
            elif ":m" in sMorph:
                if ":s" in sMorph:
................................................................................
                    aSugg.add(suggFemSing(sFlex))
                elif ":p" in sMorph:
                    aSugg.add(suggFemPlur(sFlex))
                else:
                    aSugg.add(suggFemSing(sFlex))
                    aSugg.add(suggFemPlur(sFlex))
    elif bPlur:
        for sMorph in _oSpellChecker.getMorph(sFlex):
            if ":f" in sMorph:
                aSugg.add(suggMasPlur(sFlex))
            elif ":m" in sMorph:
                aSugg.add(suggFemPlur(sFlex))
    else:
        for sMorph in _oSpellChecker.getMorph(sFlex):
            if ":f" in sMorph:
                aSugg.add(suggMasSing(sFlex))
            elif ":m" in sMorph:
                aSugg.add(suggFemSing(sFlex))
    if aSugg:
        return "|".join(aSugg)
    return ""


def switchPlural (sFlex):
    "return plural or singular form(s) of <sFlex>"
    aSugg = set()
    for sMorph in _oSpellChecker.getMorph(sFlex):
        if ":s" in sMorph:
            aSugg.add(suggPlur(sFlex))
        elif ":p" in sMorph:
            aSugg.add(suggSing(sFlex))
    if aSugg:
        return "|".join(aSugg)
    return ""


def hasSimil (sWord, sPattern=None):
    "return True if there is words phonetically similar to <sWord> (according to <sPattern> if required)"
    return phonet.hasSimil(sWord, sPattern)


def suggSimil (sWord, sPattern=None, bSubst=False, bVC=False):
    "return list of words phonetically similar to sWord and whom POS is matching sPattern"
    if bVC:
        sWord, sSfx = splitVerb(sWord)
    aSugg = phonet.selectSimil(sWord, sPattern)

    for sMorph in _oSpellChecker.getMorph(sWord):
        aSugg.update(conj.getSimil(sWord, sMorph, bSubst))
        break
    if aSugg:
        if bVC:
            aSugg = list(map(lambda sSug: sSug + sSfx, aSugg))
        return "|".join(aSugg)
    return ""


def suggCeOrCet (sWord):
    "suggest “ce” or “cet” or both according to the first letter of <sWord>"
    if re.match("(?i)[aeéèêiouyâîï]", sWord):
        return "cet"
    if sWord[0:1] == "h" or sWord[0:1] == "H":
        return "ce|cet"
    return "ce"


def suggLesLa (sWord):
    "suggest “les” or “la” according to <sWord>"
    if any( ":p" in sMorph  for sMorph in _oSpellChecker.getMorph(sWord) ):
        return "les|la"
    return "la"


_zBinary = re.compile("^[01]+$")

def formatNumber (s):
    "add spaces or hyphens to big numbers"
    nLen = len(s)
    if nLen < 4:
        return s
    sRes = ""
    # nombre ordinaire
    nEnd = nLen
    while nEnd > 0:
................................................................................
    elif nLen == 9 and s.startswith("0"):
        sRes += "|" + s[0:3] + " " + s[3:5] + " " + s[5:7] + " " + s[7:9]                   # fixe belge 1
        sRes += "|" + s[0:2] + " " + s[2:5] + " " + s[5:7] + " " + s[7:9]                   # fixe belge 2
    return sRes


def formatNF (s):
    "typography: format NF reference (norme française)"
    try:
        m = re.match("NF[  -]?(C|E|P|Q|S|X|Z|EN(?:[  -]ISO|))[  -]?([0-9]+(?:[/‑-][0-9]+|))", s)
        if not m:
            return ""
        return "NF " + m.group(1).upper().replace(" ", " ").replace("-", " ") + " " + m.group(2).replace("/", "‑").replace("-", "‑")
    except:
        traceback.print_exc()
        return "# erreur #"


def undoLigature (c):
    "typography: split ligature character <c> in several chars"
    if c == "fi":
        return "fi"
    elif c == "fl":
        return "fl"
    elif c == "ff":
        return "ff"
    elif c == "ffi":
................................................................................


_xNormalizedCharsForInclusiveWriting = str.maketrans({
    '(': '_',  ')': '_',
    '.': '_',  '·': '_',
    '–': '_',  '—': '_',
    '/': '_'
})


def normalizeInclusiveWriting (sToken):
    "typography: replace word separators used in inclusive writing by underscore (_)"
    return sToken.translate(_xNormalizedCharsForInclusiveWriting)
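
Illustrative calls (in this module's context): every separator listed in the translation table collapses to an underscore, so the grammar rules see a single regular token shape:

    assert normalizeInclusiveWriting("député·e·s") == "député_e_s"
    assert normalizeInclusiveWriting("étudiant(e)s") == "étudiant_e_s"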

Modified gc_lang/fr/modules/lexicographe.py from [5e53113f51] to [175c38852d].



# Grammalecte - Lexicographe


# License: MPL 2


import re
import traceback


_dTAGS = {  
    ':N': (" nom,", "Nom"),
    ':A': (" adjectif,", "Adjectif"),
    ':M1': (" prénom,", "Prénom"),
    ':M2': (" patronyme,", "Patronyme, matronyme, nom de famille…"),
    ':MP': (" nom propre,", "Nom propre"),
    ':W': (" adverbe,", "Adverbe"),
    ':J': (" interjection,", "Interjection"),
................................................................................
    ':O2': (" 2ᵉ pers.,", "Pronom : 2ᵉ personne"),
    ':O3': (" 3ᵉ pers.,", "Pronom : 3ᵉ personne"),
    ':C': (" conjonction,", "Conjonction"),
    ':Ĉ': (" conjonction (él.),", "Conjonction (élément)"),
    ':Cc': (" conjonction de coordination,", "Conjonction de coordination"),
    ':Cs': (" conjonction de subordination,", "Conjonction de subordination"),
    ':Ĉs': (" conjonction de subordination (él.),", "Conjonction de subordination (élément)"),
    
    ':Ñ': (" locution nominale (él.),", "Locution nominale (élément)"),
    ':Â': (" locution adjectivale (él.),", "Locution adjectivale (élément)"),
    ':Ṽ': (" locution verbale (él.),", "Locution verbale (élément)"),
    ':Ŵ': (" locution adverbiale (él.),", "Locution adverbiale (élément)"),
    ':Ŕ': (" locution prépositive (él.),", "Locution prépositive (élément)"),
    ':Ĵ': (" locution interjective (él.),", "Locution interjective (élément)"),

................................................................................
    'il': " pronom personnel sujet, 3ᵉ pers. masc. sing.",
    'on': " pronom personnel sujet, 3ᵉ pers. sing. ou plur.",
    'elle': " pronom personnel sujet, 3ᵉ pers. fém. sing.",
    'nous': " pronom personnel sujet/objet, 1ʳᵉ pers. plur.",
    'vous': " pronom personnel sujet/objet, 2ᵉ pers. plur.",
    'ils': " pronom personnel sujet, 3ᵉ pers. masc. plur.",
    'elles': " pronom personnel sujet, 3ᵉ pers. masc. plur.",
    
    "là": " particule démonstrative",
    "ci": " particule démonstrative",
    
    'le': " COD, masc. sing.",
    'la': " COD, fém. sing.",
    'les': " COD, plur.",
        
    'moi': " COI (à moi), sing.",
    'toi': " COI (à toi), sing.",
    'lui': " COI (à lui ou à elle), sing.",
    'nous2': " COI (à nous), plur.",
    'vous2': " COI (à vous), plur.",
    'leur': " COI (à eux ou à elles), plur.",

................................................................................
    "m'en": " (me) pronom personnel objet + (en) pronom adverbial",
    "t'en": " (te) pronom personnel objet + (en) pronom adverbial",
    "s'en": " (se) pronom personnel objet + (en) pronom adverbial",
}


class Lexicographe:


    def __init__ (self, oSpellChecker):
        self.oSpellChecker = oSpellChecker
        self._zElidedPrefix = re.compile("(?i)^([dljmtsncç]|quoiqu|lorsqu|jusqu|puisqu|qu)['’](.+)")
        self._zCompoundWord = re.compile("(?i)(\\w+)-((?:les?|la)-(?:moi|toi|lui|[nv]ous|leur)|t-(?:il|elle|on)|y|en|[mts][’'](?:y|en)|les?|l[aà]|[mt]oi|leur|lui|je|tu|ils?|elles?|on|[nv]ous)$")
        self._zTag = re.compile("[:;/][\\w*][^:;/]*")

    def analyzeWord (self, sWord):

        try:
            if not sWord:
                return (None, None)
            if sWord.count("-") > 4:
                return (["élément complexe indéterminé"], None)
            if sWord.isdigit():
                return (["nombre"], None)
................................................................................
                aMorph.append( "{} : {}".format(sWord, self.formatTags(lMorph[0])) )
            else:
                aMorph.append( "{} :  inconnu du dictionnaire".format(sWord) )
            # suffixe d’un mot composé
            if m2:
                aMorph.append( "-{} : {}".format(m2.group(2), self._formatSuffix(m2.group(2).lower())) )
            # Verbes
            aVerb = set([ s[1:s.find(" ")]  for s in lMorph  if ":V" in s ])
            return (aMorph, aVerb)
        except:
            traceback.print_exc()
            return (["#erreur"], None)

    def formatTags (self, sTags):

        sRes = ""
        sTags = re.sub("(?<=V[1-3])[itpqnmr_eaxz]+", "", sTags)
        sTags = re.sub("(?<=V0[ea])[itpqnmr_eaxz]+", "", sTags)
        for m in self._zTag.finditer(sTags):
            sRes += _dTAGS.get(m.group(0), " [{}]".format(m.group(0)))[0]
        if sRes.startswith(" verbe") and not sRes.endswith("infinitif"):
            sRes += " [{}]".format(sTags[1:sTags.find(" ")])
"""
Grammalecte - Lexicographe
"""

# License: MPL 2


import re
import traceback


_dTAGS = {
    ':N': (" nom,", "Nom"),
    ':A': (" adjectif,", "Adjectif"),
    ':M1': (" prénom,", "Prénom"),
    ':M2': (" patronyme,", "Patronyme, matronyme, nom de famille…"),
    ':MP': (" nom propre,", "Nom propre"),
    ':W': (" adverbe,", "Adverbe"),
    ':J': (" interjection,", "Interjection"),
................................................................................
    ':O2': (" 2ᵉ pers.,", "Pronom : 2ᵉ personne"),
    ':O3': (" 3ᵉ pers.,", "Pronom : 3ᵉ personne"),
    ':C': (" conjonction,", "Conjonction"),
    ':Ĉ': (" conjonction (él.),", "Conjonction (élément)"),
    ':Cc': (" conjonction de coordination,", "Conjonction de coordination"),
    ':Cs': (" conjonction de subordination,", "Conjonction de subordination"),
    ':Ĉs': (" conjonction de subordination (él.),", "Conjonction de subordination (élément)"),

    ':Ñ': (" locution nominale (él.),", "Locution nominale (élément)"),
    ':Â': (" locution adjectivale (él.),", "Locution adjectivale (élément)"),
    ':Ṽ': (" locution verbale (él.),", "Locution verbale (élément)"),
    ':Ŵ': (" locution adverbiale (él.),", "Locution adverbiale (élément)"),
    ':Ŕ': (" locution prépositive (él.),", "Locution prépositive (élément)"),
    ':Ĵ': (" locution interjective (él.),", "Locution interjective (élément)"),

................................................................................
    'il': " pronom personnel sujet, 3ᵉ pers. masc. sing.",
    'on': " pronom personnel sujet, 3ᵉ pers. sing. ou plur.",
    'elle': " pronom personnel sujet, 3ᵉ pers. fém. sing.",
    'nous': " pronom personnel sujet/objet, 1ʳᵉ pers. plur.",
    'vous': " pronom personnel sujet/objet, 2ᵉ pers. plur.",
    'ils': " pronom personnel sujet, 3ᵉ pers. masc. plur.",
    'elles': " pronom personnel sujet, 3ᵉ pers. masc. plur.",

    "là": " particule démonstrative",
    "ci": " particule démonstrative",

    'le': " COD, masc. sing.",
    'la': " COD, fém. sing.",
    'les': " COD, plur.",

    'moi': " COI (à moi), sing.",
    'toi': " COI (à toi), sing.",
    'lui': " COI (à lui ou à elle), sing.",
    'nous2': " COI (à nous), plur.",
    'vous2': " COI (à vous), plur.",
    'leur': " COI (à eux ou à elles), plur.",

................................................................................
    "m'en": " (me) pronom personnel objet + (en) pronom adverbial",
    "t'en": " (te) pronom personnel objet + (en) pronom adverbial",
    "s'en": " (se) pronom personnel objet + (en) pronom adverbial",
}


class Lexicographe:
    "Lexicographer - word analyzer"

    def __init__ (self, oSpellChecker):
        self.oSpellChecker = oSpellChecker
        self._zElidedPrefix = re.compile("(?i)^([dljmtsncç]|quoiqu|lorsqu|jusqu|puisqu|qu)['’](.+)")
        self._zCompoundWord = re.compile("(?i)(\\w+)-((?:les?|la)-(?:moi|toi|lui|[nv]ous|leur)|t-(?:il|elle|on)|y|en|[mts][’'](?:y|en)|les?|l[aà]|[mt]oi|leur|lui|je|tu|ils?|elles?|on|[nv]ous)$")
        self._zTag = re.compile("[:;/][\\w*][^:;/]*")

    def analyzeWord (self, sWord):
        "returns a tuple (a list of morphologies, a set of verb at infinitive form)"
        try:
            if not sWord:
                return (None, None)
            if sWord.count("-") > 4:
                return (["élément complexe indéterminé"], None)
            if sWord.isdigit():
                return (["nombre"], None)
................................................................................
                aMorph.append( "{} : {}".format(sWord, self.formatTags(lMorph[0])) )
            else:
                aMorph.append( "{} :  inconnu du dictionnaire".format(sWord) )
            # suffixe d’un mot composé
            if m2:
                aMorph.append( "-{} : {}".format(m2.group(2), self._formatSuffix(m2.group(2).lower())) )
            # Verbes
            aVerb = set([ s[1:s.find("/")]  for s in lMorph  if ":V" in s ])
            return (aMorph, aVerb)
        except:
            traceback.print_exc()
            return (["#erreur"], None)

    def formatTags (self, sTags):
        "returns string: readable tags"
        sRes = ""
        sTags = re.sub("(?<=V[1-3])[itpqnmr_eaxz]+", "", sTags)
        sTags = re.sub("(?<=V0[ea])[itpqnmr_eaxz]+", "", sTags)
        for m in self._zTag.finditer(sTags):
            sRes += _dTAGS.get(m.group(0), " [{}]".format(m.group(0)))[0]
        if sRes.startswith(" verbe") and not sRes.endswith("infinitif"):
            sRes += " [{}]".format(sTags[1:sTags.find(" ")])

Modified gc_lang/fr/modules/mfsp.py from [3f4814b5d6] to [8b7759e076].



# Masculins, féminins, singuliers et pluriels


from .mfsp_data import lTagMiscPlur as _lTagMiscPlur
from .mfsp_data import lTagMasForm as _lTagMasForm
from .mfsp_data import dMiscPlur as _dMiscPlur
from .mfsp_data import dMasForm as _dMasForm


"""
Masculins, féminins, singuliers et pluriels
"""

from .mfsp_data import lTagMiscPlur as _lTagMiscPlur
from .mfsp_data import lTagMasForm as _lTagMasForm
from .mfsp_data import dMiscPlur as _dMiscPlur
from .mfsp_data import dMasForm as _dMasForm


Modified gc_lang/fr/modules/phonet.py from [cc107e0763] to [df9f884192].



# Grammalecte - Suggestion phonétique


# License: GPL 3

import re

from .phonet_data import dWord as _dWord
from .phonet_data import lSet as _lSet
from .phonet_data import dMorph as _dMorph
"""
Grammalecte - Suggestion phonétique
"""

# License: GPL 3

import re

from .phonet_data import dWord as _dWord
from .phonet_data import lSet as _lSet
from .phonet_data import dMorph as _dMorph

Modified gc_lang/fr/modules/tests.py from [2e6f413e05] to [7a6a733de8].

#! python3
# coding: UTF-8





import unittest
import os
import re
import time


................................................................................
        cls._zError = re.compile(r"\{\{.*?\}\}")
        cls._aRuleTested = set()

    def test_parse (self):
        zOption = re.compile("^__([a-zA-Z0-9]+)__ ")
        spHere, spfThisFile = os.path.split(__file__)
        with open(os.path.join(spHere, "gc_test.txt"), "r", encoding="utf-8") as hSrc:

            for sLine in ( s for s in hSrc if not s.startswith("#") and s.strip() ):
                sLineNum = sLine[:10].strip()
                sLine = sLine[10:].strip()
                sOption = None
                m = zOption.search(sLine)
                if m:
                    sLine = sLine[m.end():]
................................................................................
                        sExceptedSuggs = sExceptedSuggs[1:-1]
                else:
                    sErrorText = sLine.strip()
                    sExceptedSuggs = ""
                sExpectedErrors = self._getExpectedErrors(sErrorText)
                sTextToCheck = sErrorText.replace("}}", "").replace("{{", "")
                sFoundErrors, sListErr, sFoundSuggs = self._getFoundErrors(sTextToCheck, sOption)
                self.assertEqual(sExpectedErrors, sFoundErrors, \

                                 "\n# Line num: " + sLineNum + \
                                 "\n> to check: " + _fuckBackslashUTF8(sTextToCheck) + \
                                 "\n  expected: " + sExpectedErrors + \
                                 "\n  found:    " + sFoundErrors + \
                                 "\n  errors:   \n" + sListErr)

                if sExceptedSuggs:


                    self.assertEqual(sExceptedSuggs, sFoundSuggs, "\n# Line num: " + sLineNum + "\n> to check: " + _fuckBackslashUTF8(sTextToCheck) + "\n  errors:   \n" + sListErr)






        # untested rules
        i = 0
        for sOpt, sLineId, sRuleId in gce.listRules():
            if sLineId not in self._aRuleTested and not re.search("^[0-9]+[sp]$|^[pd]_", sRuleId):
                echo(sRuleId, end= ", ")
                i += 1
        if i:
            echo("\n[{} untested rules]".format(i))

    def _splitTestLine (self, sLine):
        sText, sSugg = sLine.split("->>")
        return (sText.strip(), sSugg.strip())

#! python3


"""
Grammar checker tests for French language
"""

import unittest
import os
import re
import time


................................................................................
        cls._zError = re.compile(r"\{\{.*?\}\}")
        cls._aRuleTested = set()

    def test_parse (self):
        zOption = re.compile("^__([a-zA-Z0-9]+)__ ")
        spHere, spfThisFile = os.path.split(__file__)
        with open(os.path.join(spHere, "gc_test.txt"), "r", encoding="utf-8") as hSrc:
            nError = 0
            for sLine in ( s for s in hSrc if not s.startswith("#") and s.strip() ):
                sLineNum = sLine[:10].strip()
                sLine = sLine[10:].strip()
                sOption = None
                m = zOption.search(sLine)
                if m:
                    sLine = sLine[m.end():]
................................................................................
                        sExceptedSuggs = sExceptedSuggs[1:-1]
                else:
                    sErrorText = sLine.strip()
                    sExceptedSuggs = ""
                sExpectedErrors = self._getExpectedErrors(sErrorText)
                sTextToCheck = sErrorText.replace("}}", "").replace("{{", "")
                sFoundErrors, sListErr, sFoundSuggs = self._getFoundErrors(sTextToCheck, sOption)
                # tests
                if sExpectedErrors != sFoundErrors:
                    print("\n# Line num: " + sLineNum + \
                          "\n> to check: " + _fuckBackslashUTF8(sTextToCheck) + \
                          "\n  expected: " + sExpectedErrors + \
                          "\n  found:    " + sFoundErrors + \
                          "\n  errors:   \n" + sListErr)
                    nError += 1
                elif sExceptedSuggs:
                    if sExceptedSuggs != sFoundSuggs:
                        print("\n# Line num: " + sLineNum + \
                              "\n> to check: " + _fuckBackslashUTF8(sTextToCheck) + \
                              "\n  expected: " + sExceptedSuggs + \
                              "\n  found:    " + sFoundSuggs + \
                              "\n  errors:   \n" + sListErr)
                        nError += 1
            if nError:
                print("Unexpected errors:", nError)
        # untested rules
        i = 0
        for sOpt, sLineId, sRuleId in gce.listRules():
            if sOpt != "@@@@" and sLineId not in self._aRuleTested and not re.search("^[0-9]+[sp]$|^[pd]_", sRuleId):
                echo(sLineId + "/" + sRuleId, end= ", ")
                i += 1
        if i:
            echo("\n[{} untested rules]".format(i))

    def _splitTestLine (self, sLine):
        sText, sSugg = sLine.split("->>")
        return (sText.strip(), sSugg.strip())
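
For reference, gc_test.txt lines separate the text to check from the expected suggestions with "->>". A self-contained sketch of the split performed above; the sentence is an invented example:

    sText, sSugg = "Il {{cours}} vite. ->> court".split("->>")
    assert (sText.strip(), sSugg.strip()) == ("Il {{cours}} vite.", "court")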

Modified gc_lang/fr/modules/textformatter.py from [8fb9ec33bf] to [219d3111da].

#!python3





import re


dReplTable = {
    # surnumerary_spaces
    "start_of_paragraph":          [("^[  ]+", "")],
................................................................................
    # common
    "nbsp_titles":                 [("\\bM(mes?|ᵐᵉˢ?|grs?|ᵍʳˢ?|lles?|ˡˡᵉˢ?|rs?|ʳˢ?|M\\.) ", "M\\1 "),
                                    ("\\bP(re?s?|ʳᵉ?ˢ?) ", "P\\1 "),
                                    ("\\bD(re?s?|ʳᵉ?ˢ?) ", "D\\1 "),
                                    ("\\bV(ves?|ᵛᵉˢ?) ", "V\\1 ")],
    "nbsp_before_symbol":          [("(\\d) ?([%‰€$£¥˚Ω℃])", "\\1 \\2")],
    "nbsp_before_units":           [("(?<=[0-9⁰¹²³⁴⁵⁶⁷⁸⁹]) ?([kcmµn]?(?:[slgJKΩ]|m[²³]?|Wh?|Hz|dB)|[%‰]|°C)\\b", " \\1")],
    "nbsp_repair":                 [("(?<=[[(])[   ]([!?:;])", "\\1"),
                                    ("(https?|ftp)[   ]:(?=//)", "\\1:"),
                                    ("&([a-z]+)[   ];", "&\\1;"),
                                    ("&#([0-9]+|x[0-9a-fA-F]+)[   ];", "&#\\1;")],
    ## missing spaces
    "add_space_after_punctuation": [("([;!…])(?=\\w)", "\\1 "),
                                    ("[?](?=[A-ZÉÈÊÂÀÎ])", "? "),
                                    ("\\.(?=[A-ZÉÈÎ][a-zA-ZàâÂéÉèÈêÊîÎïÏôÔöÖûÛüÜùÙ])", ". "),
................................................................................
    "erase_non_breaking_hyphens":  [("­", "")],
    ## typographic signs
    "ts_apostrophe":          [ ("(?i)\\b([ldnjmtscç])['´‘′`](?=\\w)", "\\1’"),
                                ("(?i)(qu|jusqu|lorsqu|puisqu|quoiqu|quelqu|presqu|entr|aujourd|prud)['´‘′`]", "\\1’") ],
    "ts_ellipsis":            [ ("\\.\\.\\.", "…"),
                                ("(?<=…)[.][.]", "…"),
                                ("…[.](?![.])", "…") ],
    "ts_n_dash_middle":       [ (" [-—] ", " – "), 
                                (" [-—],", " –,") ],
    "ts_m_dash_middle":       [ (" [-–] ", " — "),
                                (" [-–],", " —,") ],
    "ts_n_dash_start":        [ ("^[-—][  ]", "– "),
                                ("^– ", "– "),
                                ("^[-–—](?=[\\w.…])", "– ") ],
    "ts_m_dash_start":        [ ("^[-–][  ]", "— "),
                                ("^— ", "— "),
                                ("^«[  ][—–-][  ]", "« — "),
                                ("^[-–—](?=[\\w.…])", "— ") ],
    "ts_quotation_marks":     [ (u'"(\\w+)"', "“$1”"),
                                ("''(\\w+)''", "“$1”"),
                                ("'(\\w+)'", "“$1”"),
                                ("^(?:\"|'')(?=\\w)", "« "),
                                (" (?:\"|'')(?=\\w)", " « "),
                                ("\\((?:\"|'')(?=\\w)", "(« "),
                                ("(?<=\\w)(?:\"|'')$", " »"),
                                ("(?<=\\w)(?:\"|'')(?=[] ,.:;?!…)])", " »"),
                                (u'(?<=[.!?…])" ', " » "),
                                (u'(?<=[.!?…])"$', " »") ],
    "ts_spell":               [ ("coeur", "cœur"), ("Coeur", "Cœur"),
                                ("coel(?=[aeio])", "cœl"), ("Coel(?=[aeio])", "Cœl"),
                                ("choeur", "chœur"), ("Choeur", "Chœur"),
                                ("foet", "fœt"), ("Foet", "Fœt"),
                                ("oeil", "œil"), ("Oeil", "Œil"),
                                ("oeno", "œno"), ("Oeno", "Œno"),
                                ("oesoph", "œsoph"), ("Oesoph", "Œsoph"),

#!python3

"""
Text formatter
"""

import re


dReplTable = {
    # surnumerary_spaces
    "start_of_paragraph":          [("^[  ]+", "")],
................................................................................
    # common
    "nbsp_titles":                 [("\\bM(mes?|ᵐᵉˢ?|grs?|ᵍʳˢ?|lles?|ˡˡᵉˢ?|rs?|ʳˢ?|M\\.) ", "M\\1 "),
                                    ("\\bP(re?s?|ʳᵉ?ˢ?) ", "P\\1 "),
                                    ("\\bD(re?s?|ʳᵉ?ˢ?) ", "D\\1 "),
                                    ("\\bV(ves?|ᵛᵉˢ?) ", "V\\1 ")],
    "nbsp_before_symbol":          [("(\\d) ?([%‰€$£¥˚Ω℃])", "\\1 \\2")],
    "nbsp_before_units":           [("(?<=[0-9⁰¹²³⁴⁵⁶⁷⁸⁹]) ?([kcmµn]?(?:[slgJKΩ]|m[²³]?|Wh?|Hz|dB)|[%‰]|°C)\\b", " \\1")],
    "nbsp_repair":                 [("(?<=[\\[(])[   ]([!?:;])", "\\1"),
                                    ("(https?|ftp)[   ]:(?=//)", "\\1:"),
                                    ("&([a-z]+)[   ];", "&\\1;"),
                                    ("&#([0-9]+|x[0-9a-fA-F]+)[   ];", "&#\\1;")],
    ## missing spaces
    "add_space_after_punctuation": [("([;!…])(?=\\w)", "\\1 "),
                                    ("[?](?=[A-ZÉÈÊÂÀÎ])", "? "),
                                    ("\\.(?=[A-ZÉÈÎ][a-zA-ZàâÂéÉèÈêÊîÎïÏôÔöÖûÛüÜùÙ])", ". "),
................................................................................
    "erase_non_breaking_hyphens":  [("­", "")],
    ## typographic signs
    "ts_apostrophe":          [ ("(?i)\\b([ldnjmtscç])['´‘′`](?=\\w)", "\\1’"),
                                ("(?i)(qu|jusqu|lorsqu|puisqu|quoiqu|quelqu|presqu|entr|aujourd|prud)['´‘′`]", "\\1’") ],
    "ts_ellipsis":            [ ("\\.\\.\\.", "…"),
                                ("(?<=…)[.][.]", "…"),
                                ("…[.](?![.])", "…") ],
    "ts_n_dash_middle":       [ (" [-—] ", " – "),
                                (" [-—],", " –,") ],
    "ts_m_dash_middle":       [ (" [-–] ", " — "),
                                (" [-–],", " —,") ],
    "ts_n_dash_start":        [ ("^[-—][  ]", "– "),
                                ("^– ", "– "),
                                ("^[-–—](?=[\\w.…])", "– ") ],
    "ts_m_dash_start":        [ ("^[-–][  ]", "— "),
                                ("^— ", "— "),
                                ("^«[  ][—–-][  ]", "« — "),
                                ("^[-–—](?=[\\w.…])", "— ") ],
    "ts_quotation_marks":     [ ('"(\\w+)"', "“$1”"),
                                ("''(\\w+)''", "“$1”"),
                                ("'(\\w+)'", "“$1”"),
                                ("^(?:\"|'')(?=\\w)", "« "),
                                (" (?:\"|'')(?=\\w)", " « "),
                                ("\\((?:\"|'')(?=\\w)", "(« "),
                                ("(?<=\\w)(?:\"|'')$", " »"),
                                ("(?<=\\w)(?:\"|'')(?=[] ,.:;?!…)])", " »"),
                                ('(?<=[.!?…])" ', " » "),
                                ('(?<=[.!?…])"$', " »") ],
    "ts_spell":               [ ("coeur", "cœur"), ("Coeur", "Cœur"),
                                ("coel(?=[aeio])", "cœl"), ("Coel(?=[aeio])", "Cœl"),
                                ("choeur", "chœur"), ("Choeur", "Chœur"),
                                ("foet", "fœt"), ("Foet", "Fœt"),
                                ("oeil", "œil"), ("Oeil", "Œil"),
                                ("oeno", "œno"), ("Oeno", "Œno"),
                                ("oesoph", "œsoph"), ("Oesoph", "Œsoph"),

Modified gc_lang/fr/oxt/Dictionnaires/dictionaries/README_dict_fr.txt from [3e9744d110] to [dee0f21f2d].

_______________________________________________________________________________

   DICTIONNAIRES ORTHOGRAPHIQUES FRANÇAIS
   version 6.3

   Olivier R. - dicollecte<at>free<dot>fr
   Dicollecte : http://www.dicollecte.org/

   Licence :
     MPL : Mozilla Public License
     version 2.0  --  http://www.mozilla.org/MPL/2.0/



_______________________________________________________________________________

   DICTIONNAIRES ORTHOGRAPHIQUES FRANÇAIS
   version 7.0

   Olivier R. - dicollecte<at>free<dot>fr
   Dicollecte : http://www.dicollecte.org/

   Licence :
     MPL : Mozilla Public License
     version 2.0  --  http://www.mozilla.org/MPL/2.0/

Modified gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-classique.aff from [d72ddf000e] to [58e3622b9d].

# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

# AFFIXES DU DICTIONNAIRE ORTHOGRAPHIQUE FRANÇAIS “CLASSIQUE” v6.3
# par Olivier R. -- licence MPL 2.0
# Généré le 01-07-2018 à 20:45
# Pour améliorer le dictionnaire, allez sur http://www.dicollecte.org/



SET UTF-8

WORDCHARS -’'1234567890.




# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

# AFFIXES DU DICTIONNAIRE ORTHOGRAPHIQUE FRANÇAIS “CLASSIQUE” v7.0
# par Olivier R. -- licence MPL 2.0
# Généré le 14-09-2018 à 10:59
# Pour améliorer le dictionnaire, allez sur http://www.dicollecte.org/



SET UTF-8

WORDCHARS -’'1234567890.

Modified gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-classique.dic from [f3b0b4c428] to [262be5a673].

77871
77872
.....
78112
78113
78114
78115
78116
78117
78118





78119
78120
78121
78122
78123
78124
78125
.....
79510
79511
79512
79513
79514
79515
79516

79517
79518
79519
79520
79521
79522
79523
.....
79552
79553
79554
79555
79556
79557
79558

79559
79560
79561
79562
79563
79564
79565
.....
80101
80102
80103
80104
80105
80106
80107

80108
80109
80110
80111
80112
80113
80114
.....
80116
80117
80118
80119
80120
80121
80122
80123

80124
80125
80126
80127
80128
80129
80130
80822
&
1er/--
1ers/--
1re/--
1res/--
1ʳᵉ/--
1ʳᵉˢ/--
................................................................................
Bradley
Bradley
Brafman
Brahim
Brahma
Brahmapoutre
Brahms
Braine-l'Alleud
Braine-le-Château
Braine-le-Comte

Brakel
Brand
Brandon
Brasov
Brassac
Brasschaat
Brassica
................................................................................
Casanova
Casey
Casimir
Casimir-Perier
Caspienne
Cassandra
Cassandre
Casseurs_Flowters
Cassidy
Cassini
Cassiopée
Castafolte
Castanet-Tolosan
Castelnaudary
Castelnau-le-Lez
................................................................................
Charybde
Chase
Chasles
Chastel-Arnaud
Château-Gontier
Château-Thierry
Châteaubriant
Château-d'Œx
Château-d'Olonne
Châteaudouble
Châteaudun
Châteauguay
Châteauneuf-du-Pape
Châteauneuf-les-Martigues
Châteaurenard
Châteauroux
Châtelain
Châtelet
................................................................................
DEUG
DFSG
DG
DGSE
DGSI
DHCP
DHEA
D'Holbach
DJ
DM
DNS
DOM
DOM-TOM
DPTH
DREES
................................................................................
Dynkin
Dysnomie
Dʳ
Dʳˢ
Dʳᵉ
Dʳᵉˢ
Dᴏꜱꜱᴍᴀɴɴ

ECS/L'D'Q'
EDF/L'D'Q'
EEPROM/L'D'Q'
EFREI/L'D'Q'
EFS/L'D'Q'
EIB/L'D'Q'
ENA/L'D'Q'
................................................................................
Eeklo/L'D'Q'
Eeyou/L'
Effinergie
Égée/L'D'Q'
Éghezée/L'D'Q'
Églantine/L'D'Q'
Égypte/L'D'

Ehrenpreis/L'D'Q'
Ehresmann/L'D'Q'
Eibit/||--
Eiffel/L'D'Q'
Eileen/L'D'Q'
Eilenberg/L'D'Q'
Eilleen/L'D'Q'
................................................................................
Goebbels
Goëmar
Goethe
Gogh
Gogol
Golan
Goldbach
Golden_Show
Goldoni
Golgi
Golgotha
Goliath
Gomorrhe
Goncourt
Gondwana
................................................................................
Helvétie/L'D'
Hem/L'D'Q'
Hemiksem/L'D'Q'
Hemingway/L'D'Q'
Henan
Hénault
Hendaye/L'D'Q'

Hénin-Beaumont/L'D'Q'
Hennebont/L'D'Q'
Hénoch/L'D'Q'
Henri/L'D'Q'
Henriette/L'D'Q'
Henrique/L'D'Q'
Henry
................................................................................
Héricourt/L'D'Q'
Hermann/L'D'Q'
Hermès/L'D'Q'
Hermine/L'D'Q'
Hermione/L'D'Q'
Hermite/L'D'Q'
Hernando/L'D'Q'
Hero_Corp
Hérode/L'D'Q'
Hérodote/L'D'Q'
Hérouville-Saint-Clair/L'D'Q'
Herschel/L'D'Q'
Herselt/L'D'Q'
Herstal
Hertz
................................................................................
Joanna
Joannie
Joaquim
Jocelyn
Jocelyne
Joconde
Jocrisse

Jodie
Jodoigne
Jody
Joe
Joël
Joëlle
Joey
................................................................................
Kjeldahl
Klaus
Klee
Klein
Klimt
Klitzing
Klondike
K'nex
Knokke-Heist
Knossos
Ko/||--
Kobe
Koch
Kodaira
Koekelberg
................................................................................
Kuurne
Kyle
Kylian
Kylie
Kyllian
Kyoto
Kyushu

L/U.||--
LCD
LED
LGBT
LGBTI
LGBTIQ
LGV
................................................................................
Laval
Lavaur
Laveran
Lavoisier
Lawrence
Laxou
Lazare
Le_Bris
Léa
Leah
Léandre
Léane
Lebbeke
Lebesgue
Lebrun
................................................................................
Léonore
Léontine
Léopold
Léopoldine
Leopoldt
Léopoldville
Leroy
Les_Vigneaux
Lesage
Lesbos
Lesieur
Lesley
Leslie
Lesneven
Lesotho
................................................................................
Louvain-la-Neuve
Louvière
Louviers
Louvre
Love
Lovecraft
Lovelace
Lovely_Rita
Lovćen
Loyola
Loyre
Lozère
Luanda
Lubbeek
Lübeck
................................................................................
Mammon
Manach
Managua
Manama
Manaus
Manche
Manchester

Mandchourie
Mandela
Mandelbrot
Mandelieu-la-Napoule
Mandor
Mandy
Manet
................................................................................
Maslow
Mason
Massachusetts
Masséna
Massenet
Massimo
Massy

Masutti
Matchstick
Mateo
Mathéo
Matheron
Matheson
Mathias
................................................................................
Mérimée
Merkel
Merleau-Ponty
Merlin
Méru
Meryl
Mésie

Mésopotamie
Messaline
Messer
Messine
Météo-France
Mettet
Metz
................................................................................
Mithra
Mithridate
Mitnick
Mitry-Mory
Mitsubishi
Mittelhausbergen
Mitterrand
Mix_Bizarre
Miyabi
Mlle/S.
Mme/S.
Mnémosyne
Mo/||--
Moab
Möbius
................................................................................
Mᵍʳˢ
Mᵐᵉ
Mᵐᵉˢ
N/U.||--
NASA
NDLR
NDT
N'Djamena
NEC
NF
NIRS
NSA
Nabil
Nabuchodonosor
Nacira
Nadège
Nadia
Nadim
Nadine
Nadir

Nagasaki
Nagata
Nagoya
Nagy
Nahum
Naimark
Nairobi
................................................................................
Nusselt
Nuuk
Nvidia
Nyarlathotep
Nyons
Nyquist
Nyx

OCDE/L'D'Q'
OCaml/L'D'Q'
ODF/L'D'Q'
Œdipe/L'D'Q'
OFBiz/D'Q'
OFCE/L'D'Q'
OGM/L'D'Q'
................................................................................
Oignies/L'D'Q'
Oisans/L'
Oise/L'
Oissel/L'D'Q'
Oklahoma/L'D'
Olaf/L'D'Q'
Oldham/L'D'Q'
Olea_Medical
Oleg/L'D'Q'
Olen/L'D'Q'
Oléron/L'D'Q'
Olga/L'D'Q'
Oliver/L'D'Q'
Olivet/L'D'Q'
Olivia/L'D'Q'
................................................................................
Pullman
Pune
Purcell
Puteaux
Puurs
Puy-de-Dôme
Puy-en-Velay

Pyongyang
Pyrénées
Pyrénées-Atlantiques
Pyrénées-Orientales
Pyrrha
Pyrrhus
Pythagore
................................................................................
Rivery
Riviera
Rivière-Pilote
Rivière-Salée
Rixensart
Rixheim
Riyad
R'lyeh
R'n'B
Roanne
Rob
Robert
Roberta
Roberte
Roberto
Roberval
................................................................................
Ruth
Rutherford
Rutishauser
Rwanda
Ryan
Ryanair
Ryxeo


S/U.||--
SA
SADT
SAP
SARL
SCIC
SCOT
................................................................................
Saint-Louis
Saint-Malo
Saint-Mandé
Saint-Marin
Saint-Martin
Saint-Martin-Boulogne
Saint-Martin-Petit
Saint-Martin-d'Hères
Saint-Martin-de-Crau
Saint-Maur-des-Fossés
Saint-Maurice
Saint-Max
Saint-Maximin-la-Sainte-Baume
Saint-Médard-en-Jalles
Saint-Michel-de-Feins
Saint-Michel-sur-Orge
................................................................................
Schwerin
Schwytz
Schwyz
Scipion
Scott
Scoville
Scrameustache/S.
Scred_TV
Scudéry
Scylla
SeaMonkey
Seagate
Seamus
Sean
Seat
................................................................................
Vachez
Vadim
Vaduz
Vahan
Vaires-sur-Marne
Valais
Valbonne
Val-d'Oise
Val-d'Or
Val-de-Marne
Val-de-Reuil


Valence
Valenciennes
Valentigney
Valentin
Valentina
Valentine
Valentinien
................................................................................
Wilfred
Wilfrid
Wilfried
Wilhelm
Will
Willa
Willebroek

William
Williams
Willie
Willy
Wilma
Wilson
Windhoek
................................................................................
Xavière
Xe/--
Xebia
Xenia
Xénophane
Xénophon
Xerxès
Xi'an
Xining
Xinjiang

Xᵉ/--
YHWH
Yacine
Yaël
Yaëlle
Yahvé
Yahweh
................................................................................
annulable/S*
annulaire/S*
annulation/S*
annulative/F*
annulatrice/F*
annulement/S*
annuler/a4p+

annuus
anoblir/f4p+
anoblissante/F*
anoblissement/S*
anode/S*
anodine/F*
anodique/S*
................................................................................
autoclave/S*
autoclave/S*
autoclaviste/S*
autocollante/F*
autocommutateur/S*
autocompenser/a4p+
auto-compenser/a4p+

autoconcurrence/S*
autoconditionnement/S*
auto-conditionnement/S*
autoconduction/S*
autoconservation/S*
autoconsommation/S*
autoconstruction/S*
................................................................................
avionneuse/F*
avions-cargos/D'Q'
avipelvien/S*
aviron/S*
avirulence/S*
avis/L'D'Q'
aviser/a4p+

aviso/S*
avitaillement/S*
avitailler/a4p+
avitailleuse/F*
avitaminose/S*
avivage/S*
avivement/S*
................................................................................
baguenauder/a0p+
baguenaudier/S.
baguer/a0p+
baguette/S.
baguier/S.
baguiste/S.
bah
baha'ie/F.
bahaïe/F.
baha'isme/S.
bahaïsme/S.
bahamienne/F.


bahreïnie/F.
baht/S.
bahut/S.
bahutage/S.
baie/F.
baignade/S.
baigner/a0p+
................................................................................
binocle/S.
binoculaire/S.
binodale/S.
binôme/S.
binomiale/W.
binominale/W.
binouze/S.
bin's
bintje/S.

bio
bio/S.
bioabsorbable/S.
bioaccumulable/S.
bioaccumulation/S.
bioacoustique/S.
bioagresseur/S.
................................................................................
boui-boui
bouif/S.
bouillabaisse/S.
bouillage/S.
bouillante/F.
bouillasse/S.
bouille/S.

bouilleuse/F.
bouillie/S.
bouillir/iQ
bouillissage/S.
bouilloire/S.
bouillon/S.
bouillonnante/F.
................................................................................
caviarder/a0p+
cavicorne/S.
caviste/S.
cavitaire/S.
cavitation/S.
cavité/S.
cd/U.||--

ce
céans
cébette/S.
cébiste/S.
ceci
cécidie/S.
cécidomyie/S.
................................................................................
cesse
cesser/a0p+
cessez-le-feu
cessibilité/S.
cessible/S.
cession/S.
cessionnaire/S.
c'est-à-dire
ceste/S.
cestode/S.
césure/S.
cet
cétacé/S.
cétane/S.
céteau/X.
................................................................................
chrysostome/S.
chrysothérapie/S.
chrysotile/S.
chtarbée/F.
chthonienne/F.
chti/S.
chtimi/S.
ch'timi/S.
chtouille/S.
chuchotage/S.
chuchotante/F.
chuchotement/S.
chuchoter/a0p+
chuchoterie/S.
chuchoteuse/F.
................................................................................
chutney/S.
chyle/S.
chylifère/S.
chyme/S.
chymotrypsine/S.
chyprée/F.
chypriote/S.

ci
ciabatta/S.
ciao
ci-après
ci-avant
cibiche/S.
cibiste/S.
................................................................................
cytosquelette/S.
cytostatique/S.
cytotoxicité/S.
cytotoxique/S.
czar/S.
czardas
czimbalum/S.


d
d/||--
dB/||--
daba/S.
dacite/S.
dacryoadénite/S.
dacryocystite/S.
................................................................................
datte/S.
dattier/S.
datura/S.
daube/S.
dauber/a0p+
daubeuse/F.
daubière/S.
d'aucuns
dauphine/F.
dauphinelle/S.
dauphinoise/F.
daurade/S.
davantage
davier/S.
dazibao/S.

de
dé/S.
déactiver/a0p+
deal/S.
dealer/S.
dealer/a0p+
déambulateur/S.
................................................................................
dégeler/b0p+
dégénération/S.
dégénérative/F.
dégénérée/F.
dégénérer/c0p+
dégénérescence/S.
dégénérescente/F.

dégerbage/S.
dégermage/S.
dégermer/a0p+
dégingander/a0p+
dégîter/a0p+
dégivrage/S.
dégivrante/F.
................................................................................
dépoitrailler/a0p+
dépolarisation/S.
dépolariser/a0p+
dépolir/f0p+
dépolissage/S.
dépolitisation/S.
dépolitiser/a0p+

dépolluer/a0p+
dépollution/S.
dépolymérisation/S.
dépolymériser/a0p+
déponente/F.
dépontiller/a0p.
dépopulation/S.
................................................................................
désensibilisation/S.
désensibiliser/a0p+
désensorceler/d0p+
désentoilage/S.
désentoiler/a0p+
désentortiller/a0p+
désentraver/a0p+

désenvasement/S.
désenvaser/a0p+
désenvelopper/a0p+
désenvenimer/a0p+
désenverguer/a0p+
désenvoûtement/S.
désenvoûter/a0p+
................................................................................
dystrophine/S.
dystrophique/S.
dystrophisation/S.
dysurie/S.
dysurique/S.
dytique/S.
dzêta



e
eV/U.||--
eau/X*
eau-de-vie/L'D'Q'
eau-forte/L'D'Q'
eaux-de-vie/D'Q'
eaux-fortes/D'Q'
................................................................................
entraccorder/a6p+
entraccuser/a6p+
entracte/S*
entradmirer/a6p+
entraide/S*
entraider/a6p+
entrailles/D'Q'
entr'aimer/a6p+
entrain/S*
entraînable/S*
entraînante/F*
entraînement/S*
entraîner/a4p+
entraîneuse/F*
entrait/S*
entrante/F*
entrapercevoir/pK
entr'apercevoir/pK
entrave/S*
entraver/a2p+
entravon/S*
entraxe/S*
entre/D'Q'Qj
entre-axes/L'D'Q'
entrebâillement/S*
................................................................................
entrées-sorties
entrefaite/S*
entrefaites
entrefer/S*
entrefilet/S*
entre-frapper/a6p+
entregent/S*
entr'égorger/a6p+
entre-haïr/fB
entre-heurter/a6p+
entrejambe/S*
entre-jambe/S*
entrelacement/S*
entrelacer/a4p+
entrelacs/L'D'Q'
................................................................................
entre-tuer/a6p+
entrevoie/S*
entre-voie/S*
entrevoir/pF
entrevous/L'D'Q'
entrevoûter/a2p+
entrevue/S*
entr'hiverner
entrisme/S*
entropie/S*
entropion/S*
entropique/S*
entroque/S*
entrouvrir/iC
entr'ouvrir/iC
entrure/S*





entuber/a2p+
enturbanner/a4p+
enture/S*
énucléation/S*
énucléer/a2p+
énumérabilité/S*
énumérable/S*
................................................................................
fécule/S.
féculence/S.
féculent/S.
féculente/F.
féculer/a0p+
féculerie/S.
féculière/F.
feda'i
fedayin
fedayin/S.
fedda'i
feddayin

fédérale/W.
fédéralisation/S.
fédéraliser/a0p+
fédéralisme/S.
fédéraliste/S.
fédération/S.
fédérative/F.
................................................................................
hypercentre/S*
hyperchimie/S*
hyperchlorhydrie/S*
hypercholestérolémie/S*
hypercholestérolémique/S*
hyperchrome/S*
hyperchromie/S*

hypercomplexe/S*
hyperconformisme/S*
hyperconnectée/F*
hypercontinentale/W*
hypercontrôle/S*
hypercorrecte/F*
hypercorrection/S*
................................................................................
hypernova/L'D'Q'
hypernovæ/D'Q'
hypéron/S*
hyperonyme/S*
hyperonymie/S*
hyperonymique/S*
hyperostose/S*

hyperparasite/S*
hyperparathyroïdie/S*
hyperphagie/S*
hyperphagique/S*
hyperphalangie/S*
hyperplan/S*
hyperplasie/S*
................................................................................
incrémentale/W*
incrémentalement/D'Q'
incrémentation/S*
incrémenter/a2p+
incrémentielle/F*
increvable/S*
incriminable/S*

incrimination/S*
incriminer/a4p+
incristallisable/S*
incritiquable/S*
incrochetable/S*
incroyable/S*
incroyablement/D'Q'
................................................................................
juron/S.
jury/S.
jus
jusant/S.
jusée/S.
jusnaturalisme/S.
jusnaturaliste/S.
jusqu/--
jusqu'au-boutisme/S.
jusqu'au-boutiste/S.
jusque
jusque-là
jusques
jusquiame/S.



jussiée/S.
jussion/S.
justaucorps
juste
juste/S.
juste-à-temps
justement
................................................................................
kyrie
kyrielle/S.
kyriologique/S.
kyste/S.
kystique/S.
kyu/S.
kyudo/S.
l
l
l/U.||--
là
la
la
la
labadens
................................................................................
leude/S.
leur
leur
leur/S.
leurre/S.
leurrer/a0p+
leurs

lev/S.
levage/S.
levageuse/F.
levain/S.
levalloisien/S.
levalloisienne/F.
lévamisole/S.
................................................................................
loricaire/S.
lorientaise/F.
loriot/S.
loriquet/S.
lorraine/F.
lorry/A.
lors
lorsqu/--
lorsque
losange/S.
losangée/F.
losangique/S.
loser/S.
lot/S.
loterie/S.
lotier/S.
................................................................................
lysogénie/S.
lysogénique/S.
lysosomale/W.
lysosome/S.
lysosomiale/W.
lysozyme/S.
lytique/S.


m
m/U.||--
mCE
mR/||--
ma
maar/S.
maboule/F.
................................................................................
mammite/S.
mammographe/S.
mammographie/S.
mammoplastie/S.
mammouth/S.
mammy/S.
mamours
mam'selle/S.
mamy/S.

mam'zelle/S.
man/S.
mana/S.
manade/S.
manadière/F.
management/S.
manager/S.
................................................................................
mastopathie/S.
mastose/S.
mastroquet/S.
masturbation/S.
masturbatoire/S.
masturbatrice/F.
masturber/a0p+
m'as-tu-vu
masure/S.
masurium/S.
mât/S.
matabiche/S.
matabicher/a0p+
matador/S.
mataf/S.
................................................................................
mazouter/a0p+
mazurka/S.
mbalax
mbar/||--
me
mea-culpa
méandre/S.

méandriforme/S.
méandrine/S.
méat/S.
méatoscopie/S.
mébibit/S.
mébioctet/S.
mec/S.
................................................................................
mémoration/S.
mémorial/X.
mémorialiste/S.
mémorielle/F.
mémorisable/S.
mémorisation/S.
mémoriser/a0p+
m'en
menaçante/F.
menace/S.
menacer/a0p+
ménade/S.
ménage/S.
ménageable/S.
ménagement/S.
................................................................................
mésestimer/a0p+
mésiale/W.
mésintelligence/S.
mésinterprétation/S.
mésique/S.
mesmérienne/F.
mesmérisme/S.

mésoblaste/S.
mésoblastique/S.
mésocarpe/S.
mésocentre/S.
mésocéphale/S.
mésocéphale/S.
mésocéphalique/S.
................................................................................
myxœdème/S.
myxomatose/S.
myxome/S.
myxomycète/S.
myxovirus
m²
m³



n
na
naan/S.
nabab/S.
nabatéenne/F.
nabi/S.
nabisme/S.
................................................................................
neurinome/S.
neuroanatomie/S.
neuro-anatomie/S.
neuroanatomique/S.
neuro-anatomique/S.
neuroanatomiste/S.
neuro-anatomiste/S.


neurobiochimie/S.
neurobiochimique/S.
neurobiochimiste/S.
neurobiologie/S.
neurobiologique/S.
neurobiologiste/S.
neuroblaste/S.
................................................................................
nicotiniser/a0p+
nicotinisme/S.
nictation/S.
nictitante/F.
nictitation/S.
nid/S.
nidation/S.
nid-d'abeilles
nid-de-pie
nid-de-poule

nidicole/S.
nidification/S.
nidificatrice/F.
nidifier/a0p.
nidifuge/S.
nids-d'abeilles
nids-de-pie
nids-de-poule

nièce/S.
niellage/S.
nielle/S.
nieller/a0p+
nielleur/S.
niellure/S.
nier/a0p+
................................................................................
nilvariété/S.
nimbe/S.
nimber/a0p+
nimbostratus
nimbo-stratus
nimbus
nîmoise/F.
n'importe
ninas
ninja/S.
ninjato/S.
niobate/S.
niobite/S.
niobium/S.
niôle/S.
................................................................................
nosologie/S.
nosologique/S.
nosophobie/S.
nostalgie/S.
nostalgique/S.
nostalgiquement
nostoc/S.

notabilité/S.
notable/S.
notable/S.
notablement
notaire/S.
notairesse/S.
notamment
................................................................................
nourrissante/F.
nourrissement/S.
nourrisseur/S.
nourrisson/S.
nourriture/S.
nous
nous
nous-même/S=

nouure/S.
nouveau-née/F.
nouveauté/S.
nouvel
nouvelle/W.
nouvellement
nouvelleté/S.
................................................................................
nymphette/S.
nympho/S.
nymphomane/S.
nymphomanie/S.
nymphoplastie/S.
nymphose/S.
nystagmus


ô
o
o/||--
oaï/S*
oaristys/L'D'Q'
oasienne/F*
oasis/L'D'Q'
................................................................................
orchestration/S*
orchestratrice/F*
orchestre/S*
orchestrer/a2p+
orchidacée/S*
orchidacée/S*
orchidée/S*

orchi-épididymite/S*
orchis/L'D'Q'
orchite/S*
ordalie/S*
ordalique/S*
ordi/S*
ordinaire/S*
................................................................................
pas-à-pas
pas-à-pas
pascal/Um
pascale/F.
pascalienne/F.
pascaline/S.
pascaux
pas-d'âne
pas-de-géant
pas-de-porte

paseo/S.
pashmina/S.
pasionaria/S.
paso-doble
pasquin/S.
pasquinade/S.
passable/S.
................................................................................
pétiole/S.
pétiolée/F.
petiote/F.
petit-beurre
petit-bois
petit-bourgeois
petit-boutiste/S.
petit-déj'
petit-déjeuner
petit-déjeuner/a0p.
petite/F.
petite-bourgeoise
petite-fille
petite-maîtresse
petitement
petite-nièce
petites-bourgeoises
................................................................................
pie
pie/S.
pièce/S.
piécette/S.
pied/S.
pied-à-terre
pied-bot
pied-d'alouette
pied-de-biche
pied-de-cheval
pied-de-chèvre
pied-de-loup
pied-de-mouton
pied-de-poule
pied-de-veau
pied-d'oiseau

pied-droit
piédestal/X.
pied-fort
piedmont/S.
pied-noir
piédouche/S.
pied-plat
piédroit/S.
pieds-bots
pieds-d'alouette
pieds-de-biche
pieds-de-cheval
pieds-de-chèvre
pieds-de-loup
pieds-de-mouton
pieds-de-poule
pieds-de-veau
pieds-d'oiseau

pieds-droits
pieds-forts
pieds-noirs
pieds-plats
piéfort/S.
piège/S.
piégeable/S.
piégeage/S.
piéger/c0p+
piégeuse/F.
piégeuse/W.
pie-grièche
piémont/S.
piémontaise/F.

piercing/S.
piéride/S.
pierrade/S.
pierrage/S.
pierraille/S.
pierre/S.
pierrer/a0p+
................................................................................
pinière/S.
pinne/S.
pinnipède/S.
pinnothère/S.
pinnule/S.
pinocytose/S.
pinot/S.
pin's
pinson/S.
pintade/S.
pintadeau/X.
pintadine/S.
pinte/S.
pinter/a0p+
pin-up
pinyin

piochage/S.
pioche/S.
piochement/S.
piocher/a0p+
piocheuse/F.
pioger/a0p.
piolet/S.
................................................................................
pompeuse/F.
pompeuse/W.
pompeusement
pompière/F.
pompiérisme/S.
pompile/S.
pompiste/S.

pompon/S.
pomponner/a0p+
ponant/S.
ponantaise/F.
ponçage/S.
ponce/S.
ponce/S.
................................................................................
pontifiante/F.
pontificale/W.
pontificalement
pontificat/S.
pontifier/a0p.
pontil/S.
pontiller/a0p+
pont-l'évêque
pont-levis

ponton/S.
pontonnier/S.
pont-promenade
ponts-levis
ponts-promenades
pontuseau/X.
pool/S.
................................................................................
président-directeur
présidente/F.
présidente-directrice
présidentes-directrices
présidentiable/S.
présidentialisation/S.
présidentialisme/S.

présidentielle/F.
présidents-directeurs
présider/a0p+
présidial/X.
présidiale/W.
présidialité/S.
présidium/S.
................................................................................
priapisme/S.
prie-Dieu
prier/a0p+
prière/S.
prieure/F.
prieuré/S.
prieuse/S.
prim'Holstein
prima-donna/I.
primage/S.
primaire/S.
primairement
primale/W.
primalité/S.
primarisation/S.
................................................................................
primo-délinquante/F.
primogéniture/S.
primo-infection/S.
primordiale/W.
primordialement
primordialité/S.
primulacée/S.

prince-de-galles
prince-de-galles
princeps
princeps
princesse/F.
princière/F.
princièrement
................................................................................
pruche/S.
prude/S.
prudemment
prudence/S.
prudente/F.
prudentielle/F.
pruderie/S.
prud'homale/W.
prud'homie/S.
prudhommale/W.
prudhomme/S.
prud'homme/S.
prudhommerie/S.
prudhommesque/S.
prudhommie/S.



pruine/S.
prune
prune/S.
pruneau/X.
prunelaie/S.
prunelée/S.
prunelle/S.
................................................................................
ptérosaure/S.
ptérosaurien/S.
ptérygion/S.
ptérygoïde/S.
ptérygoïdienne/F.
ptérygote/S.
ptérygotus
p'tite/F.
ptolémaïque/S.
ptoléméenne/F.
ptomaïne/S.
ptôse/S.
ptosis
ptôsis
ptyaline/S.
................................................................................
puis
puisage/S.
puisard/S.
puisatier/S.
puisement/S.
puiser/a0p+
puisette/S.
puisqu/--
puisque
puissamment
puissance/S.
puissante/F.
puits
pulicaire/S.
pulicaire/S.
pull/S.
................................................................................
pycnogonide/S.
pycnomètre/S.
pycnose/S.
pycnotique/S.
pyélite/S.
pyélonéphrite/S.
pygargue/S.

pygmée/S.
pygméenne/F.
pyjama/S.
pylône/S.
pylore/S.
pylorique/S.
pyocyanique/S.
................................................................................
pythique/S.
pythique/S.
python/S.
pythonisse/S.
pyurie/S.
pyxide/S.
pz/||--

q
qPCR
qanat/S.
qat/S.
qatarie/F.
qatarienne/F.
qbit/S.
qi
qu/--
qua
quad/S.
quadra
quadra/S.
quadragénaire/S.
quadragésimale/W.
quadragésime/S.
................................................................................
québécoise/F.
quebracho/S.
quechua/S.
queen/S.
queer/S.
quelconque/S.
quelle/F.
quelqu/--
quelque
quelque/S.
quelque/S.
quelquefois
quelques-unes
quelques-uns

quelqu'un
quelqu'une
quémande/S.
quémander/a0p+
quémandeuse/F.
qu'en-dira-t-on
quenelle/S.
quenotte/S.
quenouille/S.
quenouillée/S.
quenouillette/S.
quéquette/S.
quérable/S.
................................................................................
quêteuse/F.
quetsche/S.
quetschier/S.
quetter/a0p+
quetzal/S.
quetzales
queue/S.
queue-d'aronde
queue-de-cheval
queue-de-cochon
queue-de-morue
queue-de-pie
queue-de-rat
queue-de-renard
queues-d'aronde
queues-de-cheval
queues-de-cochon
queues-de-morue
queues-de-pie
queues-de-rat
queues-de-renard

queuillère/S.
queursage/S.
queusot/S.
queutarde/F.
queuter/a0p+
queux
qui
................................................................................
quiz
quizalofop/S.
quo
quoailler
quoc-ngu
quôc-ngu
quoi
quoiqu/--
quoique
quolibet/S.
quorum/S.
quota/S.
quote-part
quotes-parts
quotidienne/F.
quotidiennement
quotidienneté/S.
quotient/S.
quotité/S.
quotter/a0p.


qwerty
r
ra
rab/S.
rabâchage/S.
rabâchement/S.
rabâcher/a0p+
................................................................................
ratichon/S.
raticide/S.
raticide/S.
ratière/F.
ratification/S.
ratifier/a0p+
ratinage/S.

ratiner/a0p+
rating/S.
ratio/S.
ratiocinante/F.
ratiocination/S.
ratiociner/a0p.
ratiocineuse/F.
................................................................................
recéder/c0p+
recel/S.
recèlement/S.
receler/b0p+
receleuse/F.
récemment
récence/S.

recensement/S.
recenser/a0p+
recenseuse/F.
recension/S.
récente/F.
recentrage/S.
recentrement/S.
................................................................................
récuser/a0p+
recyclabilité/S.
recyclable/S.
recyclage/S.
recycler/a0p+
recyclerie/S.
recycleuse/F.

rédaction/S.
rédactionnel/S.
rédactionnelle/F.
rédactrice/F.
redan/S.
reddition/S.
redéclarer/a0p+
................................................................................
résistible/S.
résistive/F.
résistivité/S.
résistor/S.
resituer/a0p+
resocialisation/S.
resocialiser/a0p+

résolubilité/S.
résoluble/S.
résolument
résolution/S.
résolutive/F.
résolutoire/S.
résolvance/S.
................................................................................
rythmicité/S.
rythmique/S.
rythmiquement
s
s/U.||--
sa
saanen/S.
s'abader
sabayon/S.
sabbat/S.
sabbathienne/F.
sabbatique/S.
sabéenne/F.
sabéisme/S.
sabelle/S.
................................................................................
sabouler/a0p+
sabra/S.
sabrage/S.
sabre/S.
sabrer/a0p+
sabretache/S.
sabreuse/F.
s'abriller
saburrale/W.
saburre/S.
sac/S.
sacagner/a0p+
saccade/S.
saccader/a0p+
saccage/S.
................................................................................
sagard/S.
sage/S.
sage-femme
sagement
sages-femmes
sagesse/S.
sagette/S.
s'agir/fZ
sagittaire/S.
sagittale/W.
sagittée/F.
sagou/S.
sagouin/S.
sagoutier/S.
sagum/S.
................................................................................
surharmonique/S.
surhaussement/S.
surhausser/a0p+
surhomme/S.
surhumaine/F.
surhumainement
surhumanité/S.

suricate/S.
surie/F.
surikate/S.
surimi/S.
surimposer/a0p+
surimposition/S.
surimpression/S.
................................................................................
surplomber/a0p+
surplus
surpoids
surpopulation/S.
surprenamment
surprenante/F.
surprendre/tF

surpresseur/S.
surpression/S.
surprime/S.
surprise/S.
surprise-partie
surprises-parties
surproduction/S.
................................................................................
systémicienne/F.
systémique/S.
systole/S.
systolique/S.
systyle/S.
systyle/S.
syzygie/S.




t
t/||--
ta
tabac
tabac/S.
tabacologie/S.
tabacologue/S.
................................................................................
télex
télexer/a0p+
télexiste/S.
télicité/S.
tell/S.
telle/F.
telle/F.
t'elle/S=
tellement
tellière/S.
tellière/S.
telline/S.
tellurate/S.
tellure/S.
tellureuse/W.
................................................................................
temporellement
temporisation/S.
temporisatrice/F.
temporiser/a0p+
temporo-pariétale/F.
temps
temps-réel
t'en
tenable/S.
tenace/S.
tenacement
ténacité/S.
tenaille/S.
tenaillement/S.
tenailler/a0p+
................................................................................
tignasse/S.
tigrer/a0p+
tigresse/F.
tigridie/S.
tigron/S.
tiguidou/S.
tiki/S.
t'il/S=
tilapia/S.
tilbury/S.
tilde/S.
tiliacée/S.
tillac/S.
tillage/S.
tillandsie/S.
................................................................................
tomodensitomètre/S.
tomodensitométrie/S.
tomodensitométrique/S.
tomographe/S.
tomographie/S.
tomographique/S.
tom-pouce
t'on
ton
ton/S.
tonale/F.
tonalité/S.
tondage/S.
tondaille/S.
tondaison/S.
................................................................................
transistorisation/S.
transistoriser/a0p+
transit/S.
transitaire/S.
transiter/a0p+
transition/S.
transitionnelle/F.

transitive/F.
transitivement
transitivité/S.
transitoire/S.
transitoirement
translater/a0p+
translatif/S.
................................................................................
triterpénique/S.
trithérapie/S.
triticale/S.
tritiée/F.
tritium/S.
triton/S.
triturable/S.

triturateur/S.
trituration/S.
triturer/a0p+
triumvir/S.
triumvirale/W.
triumvirat/S.
trivalence/S.
................................................................................
trois-quarts
trois-quatre
trois-six
trôler/a0p.
troll/S.
trolle/S.
troller/a0p.

trolley/S.
trolleybus
trombe/S.
trombidion/S.
trombidiose/S.
trombine/S.
trombinoscope/S.
................................................................................
tubuline/S.
tubulopathie/S.
tubulure/S.
tudesque/S.
tudieu
tue-chien
tue-diable
tue-l'amour
tue-loup
tue-mouche
tue-mouches
tuer/a0p+
tuerie/S.
tue-tête
tueuse/F.
tuf/S.
................................................................................
tyrosinémie/S.
tyrothricine/S.
tyrrhénienne/F.
tzar/S.
tzarine/S.
tzatziki/S.
tzigane/S.





u
u/||--
ua/||--
ubac/S*
ubérale/S*
uberisation/S*
ubique/S*
................................................................................
vidéographique/S.
vidéoludique/S.
vidéophone/S.
vidéophonie/S.
vidéoprojecteur/S.
vidéoprojection/S.
vidéoprotection/S.

vide-ordures
vidéosphère/S.
vidéosurveillance/S.
vidéotex
vidéothécaire/S.
vidéothèque/S.
vidéotransmission/S.
................................................................................
vielleur/S.
vielleuse/W.
vielliste/S.
viennoise/F.
viennoiserie/S.
vierge/S.
vierge/S.

vietnamienne/F.
vieux-lille
vif-argent
vif-argent
vigésimale/W.
vigie/S.
vigilamment
................................................................................
vorace/S.
voracement
voracité/S.
vortex
vorticelle/S.
vos
vosgienne/F.

votante/F.
votation/S.
vote/S.
voter/a0p+
voteuse/F.
votive/F.
votre
................................................................................
vouer/a0p+
vouge/S.
vouivre/S.
vouloir/S.
vouloir/pB
vous
vous
vous-même/S=

vousseau/X.
voussoiement/S.
voussoir/S.
voussoyer/a0p+
voussure/S.
voûtain/S.
voûte/S.
80867
&
1er/--
1ers/--
1re/--
1res/--
1ʳᵉ/--
1ʳᵉˢ/--
................................................................................
Bradley
Bradley
Brafman
Brahim
Brahma
Brahmapoutre
Brahms

Braine-le-Château
Braine-le-Comte
Braine-l'Alleud
Brakel
Brand
Brandon
Brasov
Brassac
Brasschaat
Brassica
................................................................................
Casanova
Casey
Casimir
Casimir-Perier
Caspienne
Cassandra
Cassandre

Cassidy
Cassini
Cassiopée
Castafolte
Castanet-Tolosan
Castelnaudary
Castelnau-le-Lez
................................................................................
Charybde
Chase
Chasles
Chastel-Arnaud
Château-Gontier
Château-Thierry
Châteaubriant
Châteaudouble
Châteaudun
Château-d'Œx
Château-d'Olonne
Châteauguay
Châteauneuf-du-Pape
Châteauneuf-les-Martigues
Châteaurenard
Châteauroux
Châtelain
Châtelet
................................................................................
DEUG
DFSG
DG
DGSE
DGSI
DHCP
DHEA

DJ
DM
DNS
DOM
DOM-TOM
DPTH
DREES
................................................................................
Dynkin
Dysnomie
Dʳ
Dʳˢ
Dʳᵉ
Dʳᵉˢ
Dᴏꜱꜱᴍᴀɴɴ
D'Holbach
ECS/L'D'Q'
EDF/L'D'Q'
EEPROM/L'D'Q'
EFREI/L'D'Q'
EFS/L'D'Q'
EIB/L'D'Q'
ENA/L'D'Q'
................................................................................
Eeklo/L'D'Q'
Eeyou/L'
Effinergie
Égée/L'D'Q'
Éghezée/L'D'Q'
Églantine/L'D'Q'
Égypte/L'D'
Ehlers-Danlos
Ehrenpreis/L'D'Q'
Ehresmann/L'D'Q'
Eibit/||--
Eiffel/L'D'Q'
Eileen/L'D'Q'
Eilenberg/L'D'Q'
Eilleen/L'D'Q'
................................................................................
Goebbels
Goëmar
Goethe
Gogh
Gogol
Golan
Goldbach

Goldoni
Golgi
Golgotha
Goliath
Gomorrhe
Goncourt
Gondwana
................................................................................
Helvétie/L'D'
Hem/L'D'Q'
Hemiksem/L'D'Q'
Hemingway/L'D'Q'
Henan
Hénault
Hendaye/L'D'Q'
Hendrik/L'D'Q'
Hénin-Beaumont/L'D'Q'
Hennebont/L'D'Q'
Hénoch/L'D'Q'
Henri/L'D'Q'
Henriette/L'D'Q'
Henrique/L'D'Q'
Henry
................................................................................
Héricourt/L'D'Q'
Hermann/L'D'Q'
Hermès/L'D'Q'
Hermine/L'D'Q'
Hermione/L'D'Q'
Hermite/L'D'Q'
Hernando/L'D'Q'

Hérode/L'D'Q'
Hérodote/L'D'Q'
Hérouville-Saint-Clair/L'D'Q'
Herschel/L'D'Q'
Herselt/L'D'Q'
Herstal
Hertz
................................................................................
Joanna
Joannie
Joaquim
Jocelyn
Jocelyne
Joconde
Jocrisse
Jodhpur
Jodie
Jodoigne
Jody
Joe
Joël
Joëlle
Joey
................................................................................
Kjeldahl
Klaus
Klee
Klein
Klimt
Klitzing
Klondike

Knokke-Heist
Knossos
Ko/||--
Kobe
Koch
Kodaira
Koekelberg
................................................................................
Kuurne
Kyle
Kylian
Kylie
Kyllian
Kyoto
Kyushu
K'nex
L/U.||--
LCD
LED
LGBT
LGBTI
LGBTIQ
LGV
................................................................................
Laval
Lavaur
Laveran
Lavoisier
Lawrence
Laxou
Lazare

Léa
Leah
Léandre
Léane
Lebbeke
Lebesgue
Lebrun
................................................................................
Léonore
Léontine
Léopold
Léopoldine
Leopoldt
Léopoldville
Leroy

Lesage
Lesbos
Lesieur
Lesley
Leslie
Lesneven
Lesotho
................................................................................
Louvain-la-Neuve
Louvière
Louviers
Louvre
Love
Lovecraft
Lovelace

Lovćen
Loyola
Loyre
Lozère
Luanda
Lubbeek
Lübeck
................................................................................
Mammon
Manach
Managua
Manama
Manaus
Manche
Manchester
Mandalay
Mandchourie
Mandela
Mandelbrot
Mandelieu-la-Napoule
Mandor
Mandy
Manet
................................................................................
Maslow
Mason
Massachusetts
Masséna
Massenet
Massimo
Massy
MasterCard
Masutti
Matchstick
Mateo
Mathéo
Matheron
Matheson
Mathias
................................................................................
Mérimée
Merkel
Merleau-Ponty
Merlin
Méru
Meryl
Mésie
Mésoamérique
Mésopotamie
Messaline
Messer
Messine
Météo-France
Mettet
Metz
................................................................................
Mithra
Mithridate
Mitnick
Mitry-Mory
Mitsubishi
Mittelhausbergen
Mitterrand

Miyabi
Mlle/S.
Mme/S.
Mnémosyne
Mo/||--
Moab
Möbius
................................................................................
Mᵍʳˢ
Mᵐᵉ
Mᵐᵉˢ
N/U.||--
NASA
NDLR
NDT

NEC
NF
NIRS
NSA
Nabil
Nabuchodonosor
Nacira
Nadège
Nadia
Nadim
Nadine
Nadir
Nadja
Nagasaki
Nagata
Nagoya
Nagy
Nahum
Naimark
Nairobi
................................................................................
Nusselt
Nuuk
Nvidia
Nyarlathotep
Nyons
Nyquist
Nyx
N'Djamena
OCDE/L'D'Q'
OCaml/L'D'Q'
ODF/L'D'Q'
Œdipe/L'D'Q'
OFBiz/D'Q'
OFCE/L'D'Q'
OGM/L'D'Q'
................................................................................
Oignies/L'D'Q'
Oisans/L'
Oise/L'
Oissel/L'D'Q'
Oklahoma/L'D'
Olaf/L'D'Q'
Oldham/L'D'Q'

Oleg/L'D'Q'
Olen/L'D'Q'
Oléron/L'D'Q'
Olga/L'D'Q'
Oliver/L'D'Q'
Olivet/L'D'Q'
Olivia/L'D'Q'
................................................................................
Pullman
Pune
Purcell
Puteaux
Puurs
Puy-de-Dôme
Puy-en-Velay
Pygmalion
Pyongyang
Pyrénées
Pyrénées-Atlantiques
Pyrénées-Orientales
Pyrrha
Pyrrhus
Pythagore
................................................................................
Rivery
Riviera
Rivière-Pilote
Rivière-Salée
Rixensart
Rixheim
Riyad


Roanne
Rob
Robert
Roberta
Roberte
Roberto
Roberval
................................................................................
Ruth
Rutherford
Rutishauser
Rwanda
Ryan
Ryanair
Ryxeo
R'lyeh
R'n'B
S/U.||--
SA
SADT
SAP
SARL
SCIC
SCOT
................................................................................
Saint-Louis
Saint-Malo
Saint-Mandé
Saint-Marin
Saint-Martin
Saint-Martin-Boulogne
Saint-Martin-Petit
Saint-Martin-de-Crau
Saint-Martin-d'Hères
Saint-Maur-des-Fossés
Saint-Maurice
Saint-Max
Saint-Maximin-la-Sainte-Baume
Saint-Médard-en-Jalles
Saint-Michel-de-Feins
Saint-Michel-sur-Orge
................................................................................
Schwerin
Schwytz
Schwyz
Scipion
Scott
Scoville
Scrameustache/S.

Scudéry
Scylla
SeaMonkey
Seagate
Seamus
Sean
Seat
................................................................................
Vachez
Vadim
Vaduz
Vahan
Vaires-sur-Marne
Valais
Valbonne


Val-de-Marne
Val-de-Reuil
Val-d'Oise
Val-d'Or
Valence
Valenciennes
Valentigney
Valentin
Valentina
Valentine
Valentinien
................................................................................
Wilfred
Wilfrid
Wilfried
Wilhelm
Will
Willa
Willebroek
Willem
William
Williams
Willie
Willy
Wilma
Wilson
Windhoek
................................................................................
Xavière
Xe/--
Xebia
Xenia
Xénophane
Xénophon
Xerxès

Xining
Xinjiang
Xi'an
Xᵉ/--
YHWH
Yacine
Yaël
Yaëlle
Yahvé
Yahweh
................................................................................
annulable/S*
annulaire/S*
annulation/S*
annulative/F*
annulatrice/F*
annulement/S*
annuler/a4p+
annulingus/L'D'Q'
annuus
anoblir/f4p+
anoblissante/F*
anoblissement/S*
anode/S*
anodine/F*
anodique/S*
................................................................................
autoclave/S*
autoclave/S*
autoclaviste/S*
autocollante/F*
autocommutateur/S*
autocompenser/a4p+
auto-compenser/a4p+
autocomplétion/S*
autoconcurrence/S*
autoconditionnement/S*
auto-conditionnement/S*
autoconduction/S*
autoconservation/S*
autoconsommation/S*
autoconstruction/S*
................................................................................
avionneuse/F*
avions-cargos/D'Q'
avipelvien/S*
aviron/S*
avirulence/S*
avis/L'D'Q'
aviser/a4p+
aviseur/S*
aviso/S*
avitaillement/S*
avitailler/a4p+
avitailleuse/F*
avitaminose/S*
avivage/S*
avivement/S*
................................................................................
baguenauder/a0p+
baguenaudier/S.
baguer/a0p+
baguette/S.
baguier/S.
baguiste/S.
bah

bahaïe/F.

bahaïsme/S.
bahamienne/F.
baha'ie/F.
baha'isme/S.
bahreïnie/F.
baht/S.
bahut/S.
bahutage/S.
baie/F.
baignade/S.
baigner/a0p+
................................................................................
binocle/S.
binoculaire/S.
binodale/S.
binôme/S.
binomiale/W.
binominale/W.
binouze/S.

bintje/S.
bin's
bio
bio/S.
bioabsorbable/S.
bioaccumulable/S.
bioaccumulation/S.
bioacoustique/S.
bioagresseur/S.
................................................................................
boui-boui
bouif/S.
bouillabaisse/S.
bouillage/S.
bouillante/F.
bouillasse/S.
bouille/S.
bouillette/S.
bouilleuse/F.
bouillie/S.
bouillir/iQ
bouillissage/S.
bouilloire/S.
bouillon/S.
bouillonnante/F.
................................................................................
caviarder/a0p+
cavicorne/S.
caviste/S.
cavitaire/S.
cavitation/S.
cavité/S.
cd/U.||--
ce
ce
céans
cébette/S.
cébiste/S.
ceci
cécidie/S.
cécidomyie/S.
................................................................................
cesse
cesser/a0p+
cessez-le-feu
cessibilité/S.
cessible/S.
cession/S.
cessionnaire/S.

ceste/S.
cestode/S.
césure/S.
cet
cétacé/S.
cétane/S.
céteau/X.
................................................................................
chrysostome/S.
chrysothérapie/S.
chrysotile/S.
chtarbée/F.
chthonienne/F.
chti/S.
chtimi/S.

chtouille/S.
chuchotage/S.
chuchotante/F.
chuchotement/S.
chuchoter/a0p+
chuchoterie/S.
chuchoteuse/F.
................................................................................
chutney/S.
chyle/S.
chylifère/S.
chyme/S.
chymotrypsine/S.
chyprée/F.
chypriote/S.
ch'timi/S.
ci
ciabatta/S.
ciao
ci-après
ci-avant
cibiche/S.
cibiste/S.
................................................................................
cytosquelette/S.
cytostatique/S.
cytotoxicité/S.
cytotoxique/S.
czar/S.
czardas
czimbalum/S.
c'
c'est-à-dire
d
d/||--
dB/||--
daba/S.
dacite/S.
dacryoadénite/S.
dacryocystite/S.
................................................................................
datte/S.
dattier/S.
datura/S.
daube/S.
dauber/a0p+
daubeuse/F.
daubière/S.

dauphine/F.
dauphinelle/S.
dauphinoise/F.
daurade/S.
davantage
davier/S.
dazibao/S.
de
de
dé/S.
déactiver/a0p+
deal/S.
dealer/S.
dealer/a0p+
déambulateur/S.
................................................................................
dégeler/b0p+
dégénération/S.
dégénérative/F.
dégénérée/F.
dégénérer/c0p+
dégénérescence/S.
dégénérescente/F.
dégenrer/a0p+
dégerbage/S.
dégermage/S.
dégermer/a0p+
dégingander/a0p+
dégîter/a0p+
dégivrage/S.
dégivrante/F.
................................................................................
dépoitrailler/a0p+
dépolarisation/S.
dépolariser/a0p+
dépolir/f0p+
dépolissage/S.
dépolitisation/S.
dépolitiser/a0p+
dépolluante/F.
dépolluer/a0p+
dépollution/S.
dépolymérisation/S.
dépolymériser/a0p+
déponente/F.
dépontiller/a0p.
dépopulation/S.
................................................................................
désensibilisation/S.
désensibiliser/a0p+
désensorceler/d0p+
désentoilage/S.
désentoiler/a0p+
désentortiller/a0p+
désentraver/a0p+
désentrelacer/a4p+
désenvasement/S.
désenvaser/a0p+
désenvelopper/a0p+
désenvenimer/a0p+
désenverguer/a0p+
désenvoûtement/S.
désenvoûter/a0p+
................................................................................
dystrophine/S.
dystrophique/S.
dystrophisation/S.
dysurie/S.
dysurique/S.
dytique/S.
dzêta
d'
d'
d'aucuns
e
eV/U.||--
eau/X*
eau-de-vie/L'D'Q'
eau-forte/L'D'Q'
eaux-de-vie/D'Q'
eaux-fortes/D'Q'
................................................................................
entraccorder/a6p+
entraccuser/a6p+
entracte/S*
entradmirer/a6p+
entraide/S*
entraider/a6p+
entrailles/D'Q'

entrain/S*
entraînable/S*
entraînante/F*
entraînement/S*
entraîner/a4p+
entraîneuse/F*
entrait/S*
entrante/F*
entrapercevoir/pK

entrave/S*
entraver/a2p+
entravon/S*
entraxe/S*
entre/D'Q'Qj
entre-axes/L'D'Q'
entrebâillement/S*
................................................................................
entrées-sorties
entrefaite/S*
entrefaites
entrefer/S*
entrefilet/S*
entre-frapper/a6p+
entregent/S*

entre-haïr/fB
entre-heurter/a6p+
entrejambe/S*
entre-jambe/S*
entrelacement/S*
entrelacer/a4p+
entrelacs/L'D'Q'
................................................................................
entre-tuer/a6p+
entrevoie/S*
entre-voie/S*
entrevoir/pF
entrevous/L'D'Q'
entrevoûter/a2p+
entrevue/S*

entrisme/S*
entropie/S*
entropion/S*
entropique/S*
entroque/S*
entrouvrir/iC

entrure/S*
entr'aimer/a6p+
entr'apercevoir/pK
entr'égorger/a6p+
entr'hiverner
entr'ouvrir/iC
entuber/a2p+
enturbanner/a4p+
enture/S*
énucléation/S*
énucléer/a2p+
énumérabilité/S*
énumérable/S*
................................................................................
fécule/S.
féculence/S.
féculent/S.
féculente/F.
féculer/a0p+
féculerie/S.
féculière/F.

fedayin
fedayin/S.
feda'i
feddayin
fedda'i
fédérale/W.
fédéralisation/S.
fédéraliser/a0p+
fédéralisme/S.
fédéraliste/S.
fédération/S.
fédérative/F.
................................................................................
hypercentre/S*
hyperchimie/S*
hyperchlorhydrie/S*
hypercholestérolémie/S*
hypercholestérolémique/S*
hyperchrome/S*
hyperchromie/S*
hypercoagulabilité/S*
hypercomplexe/S*
hyperconformisme/S*
hyperconnectée/F*
hypercontinentale/W*
hypercontrôle/S*
hypercorrecte/F*
hypercorrection/S*
................................................................................
hypernova/L'D'Q'
hypernovæ/D'Q'
hypéron/S*
hyperonyme/S*
hyperonymie/S*
hyperonymique/S*
hyperostose/S*
hyperoxie/S*
hyperparasite/S*
hyperparathyroïdie/S*
hyperphagie/S*
hyperphagique/S*
hyperphalangie/S*
hyperplan/S*
hyperplasie/S*
................................................................................
incrémentale/W*
incrémentalement/D'Q'
incrémentation/S*
incrémenter/a2p+
incrémentielle/F*
increvable/S*
incriminable/S*
incriminante/F*
incrimination/S*
incriminer/a4p+
incristallisable/S*
incritiquable/S*
incrochetable/S*
incroyable/S*
incroyablement/D'Q'
................................................................................
juron/S.
jury/S.
jus
jusant/S.
jusée/S.
jusnaturalisme/S.
jusnaturaliste/S.



jusque
jusque-là
jusques
jusquiame/S.
jusqu'/--
jusqu'au-boutisme/S.
jusqu'au-boutiste/S.
jussiée/S.
jussion/S.
justaucorps
juste
juste/S.
juste-à-temps
justement
................................................................................
kyrie
kyrielle/S.
kyriologique/S.
kyste/S.
kystique/S.
kyu/S.
kyudo/S.

l
l/U.||--
là
la
la
la
labadens
................................................................................
leude/S.
leur
leur
leur/S.
leurre/S.
leurrer/a0p+
leurs
leurszigues
lev/S.
levage/S.
levageuse/F.
levain/S.
levalloisien/S.
levalloisienne/F.
lévamisole/S.
................................................................................
loricaire/S.
lorientaise/F.
loriot/S.
loriquet/S.
lorraine/F.
lorry/A.
lors
lorsque
lorsqu'/--
losange/S.
losangée/F.
losangique/S.
loser/S.
lot/S.
loterie/S.
lotier/S.
................................................................................
lysogénie/S.
lysogénique/S.
lysosomale/W.
lysosome/S.
lysosomiale/W.
lysozyme/S.
lytique/S.
l'
l'
m
m/U.||--
mCE
mR/||--
ma
maar/S.
maboule/F.
................................................................................
mammite/S.
mammographe/S.
mammographie/S.
mammoplastie/S.
mammouth/S.
mammy/S.
mamours

mamy/S.
mam'selle/S.
mam'zelle/S.
man/S.
mana/S.
manade/S.
manadière/F.
management/S.
manager/S.
................................................................................
mastopathie/S.
mastose/S.
mastroquet/S.
masturbation/S.
masturbatoire/S.
masturbatrice/F.
masturber/a0p+

masure/S.
masurium/S.
mât/S.
matabiche/S.
matabicher/a0p+
matador/S.
mataf/S.
................................................................................
mazouter/a0p+
mazurka/S.
mbalax
mbar/||--
me
mea-culpa
méandre/S.
méandreuse/W.
méandriforme/S.
méandrine/S.
méat/S.
méatoscopie/S.
mébibit/S.
mébioctet/S.
mec/S.
................................................................................
mémoration/S.
mémorial/X.
mémorialiste/S.
mémorielle/F.
mémorisable/S.
mémorisation/S.
mémoriser/a0p+

menaçante/F.
menace/S.
menacer/a0p+
ménade/S.
ménage/S.
ménageable/S.
ménagement/S.
................................................................................
mésestimer/a0p+
mésiale/W.
mésintelligence/S.
mésinterprétation/S.
mésique/S.
mesmérienne/F.
mesmérisme/S.
mésoaméricaine/F.
mésoblaste/S.
mésoblastique/S.
mésocarpe/S.
mésocentre/S.
mésocéphale/S.
mésocéphale/S.
mésocéphalique/S.
................................................................................
myxœdème/S.
myxomatose/S.
myxome/S.
myxomycète/S.
myxovirus
m²
m³
m'
m'as-tu-vu
m'en
n
na
naan/S.
nabab/S.
nabatéenne/F.
nabi/S.
nabisme/S.
................................................................................
neurinome/S.
neuroanatomie/S.
neuro-anatomie/S.
neuroanatomique/S.
neuro-anatomique/S.
neuroanatomiste/S.
neuro-anatomiste/S.
neuroatypique/S.
neuro-atypique/S.
neurobiochimie/S.
neurobiochimique/S.
neurobiochimiste/S.
neurobiologie/S.
neurobiologique/S.
neurobiologiste/S.
neuroblaste/S.
................................................................................
nicotiniser/a0p+
nicotinisme/S.
nictation/S.
nictitante/F.
nictitation/S.
nid/S.
nidation/S.

nid-de-pie
nid-de-poule
nid-d'abeilles
nidicole/S.
nidification/S.
nidificatrice/F.
nidifier/a0p.
nidifuge/S.

nids-de-pie
nids-de-poule
nids-d'abeilles
nièce/S.
niellage/S.
nielle/S.
nieller/a0p+
nielleur/S.
niellure/S.
nier/a0p+
................................................................................
nilvariété/S.
nimbe/S.
nimber/a0p+
nimbostratus
nimbo-stratus
nimbus
nîmoise/F.

ninas
ninja/S.
ninjato/S.
niobate/S.
niobite/S.
niobium/S.
niôle/S.
................................................................................
nosologie/S.
nosologique/S.
nosophobie/S.
nostalgie/S.
nostalgique/S.
nostalgiquement
nostoc/S.
noszigues
notabilité/S.
notable/S.
notable/S.
notablement
notaire/S.
notairesse/S.
notamment
................................................................................
nourrissante/F.
nourrissement/S.
nourrisseur/S.
nourrisson/S.
nourriture/S.
nous
nous
nous-même
nous-mêmes
nouure/S.
nouveau-née/F.
nouveauté/S.
nouvel
nouvelle/W.
nouvellement
nouvelleté/S.
................................................................................
nymphette/S.
nympho/S.
nymphomane/S.
nymphomanie/S.
nymphoplastie/S.
nymphose/S.
nystagmus
n'
n'importe
ô
o
o/||--
oaï/S*
oaristys/L'D'Q'
oasienne/F*
oasis/L'D'Q'
................................................................................
orchestration/S*
orchestratrice/F*
orchestre/S*
orchestrer/a2p+
orchidacée/S*
orchidacée/S*
orchidée/S*
orchidologie/S*
orchi-épididymite/S*
orchis/L'D'Q'
orchite/S*
ordalie/S*
ordalique/S*
ordi/S*
ordinaire/S*
................................................................................
pas-à-pas
pas-à-pas
pascal/Um
pascale/F.
pascalienne/F.
pascaline/S.
pascaux

pas-de-géant
pas-de-porte
pas-d'âne
paseo/S.
pashmina/S.
pasionaria/S.
paso-doble
pasquin/S.
pasquinade/S.
passable/S.
................................................................................
pétiole/S.
pétiolée/F.
petiote/F.
petit-beurre
petit-bois
petit-bourgeois
petit-boutiste/S.
petit-déjeuner
petit-déjeuner/a0p.
petit-déj'
petite/F.
petite-bourgeoise
petite-fille
petite-maîtresse
petitement
petite-nièce
petites-bourgeoises
................................................................................
pie
pie/S.
pièce/S.
piécette/S.
pied/S.
pied-à-terre
pied-bot

pied-de-biche
pied-de-cheval
pied-de-chèvre
pied-de-loup
pied-de-mouton
pied-de-poule
pied-de-veau
pied-droit
pied-d'alouette
pied-d'oiseau
piédestal/X.
pied-fort
piedmont/S.
pied-noir
piédouche/S.
pied-plat
piédroit/S.
pieds-bots

pieds-de-biche
pieds-de-cheval
pieds-de-chèvre
pieds-de-loup
pieds-de-mouton
pieds-de-poule
pieds-de-veau
pieds-droits
pieds-d'alouette
pieds-d'oiseau
pieds-forts
pieds-noirs
pieds-plats
piéfort/S.
piège/S.
piégeable/S.
piégeage/S.
piéger/c0p+
piégeuse/F.
piégeuse/W.
pie-grièche
piémont/S.
piémontaise/F.
pier/S.
piercing/S.
piéride/S.
pierrade/S.
pierrage/S.
pierraille/S.
pierre/S.
pierrer/a0p+
................................................................................
pinière/S.
pinne/S.
pinnipède/S.
pinnothère/S.
pinnule/S.
pinocytose/S.
pinot/S.

pinson/S.
pintade/S.
pintadeau/X.
pintadine/S.
pinte/S.
pinter/a0p+
pin-up
pinyin
pin's
piochage/S.
pioche/S.
piochement/S.
piocher/a0p+
piocheuse/F.
pioger/a0p.
piolet/S.
................................................................................
pompeuse/F.
pompeuse/W.
pompeusement
pompière/F.
pompiérisme/S.
pompile/S.
pompiste/S.
pom-pom
pompon/S.
pomponner/a0p+
ponant/S.
ponantaise/F.
ponçage/S.
ponce/S.
ponce/S.
................................................................................
pontifiante/F.
pontificale/W.
pontificalement
pontificat/S.
pontifier/a0p.
pontil/S.
pontiller/a0p+

pont-levis
pont-l'évêque
ponton/S.
pontonnier/S.
pont-promenade
ponts-levis
ponts-promenades
pontuseau/X.
pool/S.
................................................................................
président-directeur
présidente/F.
présidente-directrice
présidentes-directrices
présidentiable/S.
présidentialisation/S.
présidentialisme/S.
présidentialiste/S.
présidentielle/F.
présidents-directeurs
présider/a0p+
présidial/X.
présidiale/W.
présidialité/S.
présidium/S.
................................................................................
priapisme/S.
prie-Dieu
prier/a0p+
prière/S.
prieure/F.
prieuré/S.
prieuse/S.

prima-donna/I.
primage/S.
primaire/S.
primairement
primale/W.
primalité/S.
primarisation/S.
................................................................................
primo-délinquante/F.
primogéniture/S.
primo-infection/S.
primordiale/W.
primordialement
primordialité/S.
primulacée/S.
prim'Holstein
prince-de-galles
prince-de-galles
princeps
princeps
princesse/F.
princière/F.
princièrement
................................................................................
pruche/S.
prude/S.
prudemment
prudence/S.
prudente/F.
prudentielle/F.
pruderie/S.


prudhommale/W.
prudhomme/S.

prudhommerie/S.
prudhommesque/S.
prudhommie/S.
prud'homale/W.
prud'homie/S.
prud'homme/S.
pruine/S.
prune
prune/S.
pruneau/X.
prunelaie/S.
prunelée/S.
prunelle/S.
................................................................................
ptérosaure/S.
ptérosaurien/S.
ptérygion/S.
ptérygoïde/S.
ptérygoïdienne/F.
ptérygote/S.
ptérygotus

ptolémaïque/S.
ptoléméenne/F.
ptomaïne/S.
ptôse/S.
ptosis
ptôsis
ptyaline/S.
................................................................................
puis
puisage/S.
puisard/S.
puisatier/S.
puisement/S.
puiser/a0p+
puisette/S.
puisque
puisqu'/--
puissamment
puissance/S.
puissante/F.
puits
pulicaire/S.
pulicaire/S.
pull/S.
................................................................................
pycnogonide/S.
pycnomètre/S.
pycnose/S.
pycnotique/S.
pyélite/S.
pyélonéphrite/S.
pygargue/S.
pygmalionisme/S.
pygmée/S.
pygméenne/F.
pyjama/S.
pylône/S.
pylore/S.
pylorique/S.
pyocyanique/S.
................................................................................
pythique/S.
pythique/S.
python/S.
pythonisse/S.
pyurie/S.
pyxide/S.
pz/||--
p'tite/F.
q
qPCR
qanat/S.
qat/S.
qatarie/F.
qatarienne/F.
qbit/S.
qi

qua
quad/S.
quadra
quadra/S.
quadragénaire/S.
quadragésimale/W.
quadragésime/S.
................................................................................
québécoise/F.
quebracho/S.
quechua/S.
queen/S.
queer/S.
quelconque/S.
quelle/F.

quelque
quelque/S.
quelque/S.
quelquefois
quelques-unes
quelques-uns
quelqu'/--
quelqu'un
quelqu'une
quémande/S.
quémander/a0p+
quémandeuse/F.

quenelle/S.
quenotte/S.
quenouille/S.
quenouillée/S.
quenouillette/S.
quéquette/S.
quérable/S.
................................................................................
quêteuse/F.
quetsche/S.
quetschier/S.
quetter/a0p+
quetzal/S.
quetzales
queue/S.

queue-de-cheval
queue-de-cochon
queue-de-morue
queue-de-pie
queue-de-rat
queue-de-renard
queue-d'aronde
queues-de-cheval
queues-de-cochon
queues-de-morue
queues-de-pie
queues-de-rat
queues-de-renard
queues-d'aronde
queuillère/S.
queursage/S.
queusot/S.
queutarde/F.
queuter/a0p+
queux
qui
................................................................................
quiz
quizalofop/S.
quo
quoailler
quoc-ngu
quôc-ngu
quoi
quoique
quoiqu'/--
quolibet/S.
quorum/S.
quota/S.
quote-part
quotes-parts
quotidienne/F.
quotidiennement
quotidienneté/S.
quotient/S.
quotité/S.
quotter/a0p.
qu'/--
qu'en-dira-t-on
qwerty
r
ra
rab/S.
rabâchage/S.
rabâchement/S.
rabâcher/a0p+
................................................................................
ratichon/S.
raticide/S.
raticide/S.
ratière/F.
ratification/S.
ratifier/a0p+
ratinage/S.
ratine/S.
ratiner/a0p+
rating/S.
ratio/S.
ratiocinante/F.
ratiocination/S.
ratiociner/a0p.
ratiocineuse/F.
................................................................................
recéder/c0p+
recel/S.
recèlement/S.
receler/b0p+
receleuse/F.
récemment
récence/S.
recensable/S.
recensement/S.
recenser/a0p+
recenseuse/F.
recension/S.
récente/F.
recentrage/S.
recentrement/S.
................................................................................
récuser/a0p+
recyclabilité/S.
recyclable/S.
recyclage/S.
recycler/a0p+
recyclerie/S.
recycleuse/F.
rédac-chef/S.
rédaction/S.
rédactionnel/S.
rédactionnelle/F.
rédactrice/F.
redan/S.
reddition/S.
redéclarer/a0p+
................................................................................
résistible/S.
résistive/F.
résistivité/S.
résistor/S.
resituer/a0p+
resocialisation/S.
resocialiser/a0p+
resolidifier/a0p+
résolubilité/S.
résoluble/S.
résolument
résolution/S.
résolutive/F.
résolutoire/S.
résolvance/S.
................................................................................
rythmicité/S.
rythmique/S.
rythmiquement
s
s/U.||--
sa
saanen/S.

sabayon/S.
sabbat/S.
sabbathienne/F.
sabbatique/S.
sabéenne/F.
sabéisme/S.
sabelle/S.
................................................................................
sabouler/a0p+
sabra/S.
sabrage/S.
sabre/S.
sabrer/a0p+
sabretache/S.
sabreuse/F.

saburrale/W.
saburre/S.
sac/S.
sacagner/a0p+
saccade/S.
saccader/a0p+
saccage/S.
................................................................................
sagard/S.
sage/S.
sage-femme
sagement
sages-femmes
sagesse/S.
sagette/S.

sagittaire/S.
sagittale/W.
sagittée/F.
sagou/S.
sagouin/S.
sagoutier/S.
sagum/S.
................................................................................
surharmonique/S.
surhaussement/S.
surhausser/a0p+
surhomme/S.
surhumaine/F.
surhumainement
surhumanité/S.
surhydratation/S.
suricate/S.
surie/F.
surikate/S.
surimi/S.
surimposer/a0p+
surimposition/S.
surimpression/S.
................................................................................
surplomber/a0p+
surplus
surpoids
surpopulation/S.
surprenamment
surprenante/F.
surprendre/tF
surprescription/S.
surpresseur/S.
surpression/S.
surprime/S.
surprise/S.
surprise-partie
surprises-parties
surproduction/S.
................................................................................
systémicienne/F.
systémique/S.
systole/S.
systolique/S.
systyle/S.
systyle/S.
syzygie/S.
s'
s'abader
s'abriller
s'agir/fZ
t
t/||--
ta
tabac
tabac/S.
tabacologie/S.
tabacologue/S.
................................................................................
télex
télexer/a0p+
télexiste/S.
télicité/S.
tell/S.
telle/F.
telle/F.

tellement
tellière/S.
tellière/S.
telline/S.
tellurate/S.
tellure/S.
tellureuse/W.
................................................................................
temporellement
temporisation/S.
temporisatrice/F.
temporiser/a0p+
temporo-pariétale/F.
temps
temps-réel

tenable/S.
tenace/S.
tenacement
ténacité/S.
tenaille/S.
tenaillement/S.
tenailler/a0p+
................................................................................
tignasse/S.
tigrer/a0p+
tigresse/F.
tigridie/S.
tigron/S.
tiguidou/S.
tiki/S.

tilapia/S.
tilbury/S.
tilde/S.
tiliacée/S.
tillac/S.
tillage/S.
tillandsie/S.
................................................................................
tomodensitomètre/S.
tomodensitométrie/S.
tomodensitométrique/S.
tomographe/S.
tomographie/S.
tomographique/S.
tom-pouce

ton
ton/S.
tonale/F.
tonalité/S.
tondage/S.
tondaille/S.
tondaison/S.
................................................................................
transistorisation/S.
transistoriser/a0p+
transit/S.
transitaire/S.
transiter/a0p+
transition/S.
transitionnelle/F.
transitionner/a0p+
transitive/F.
transitivement
transitivité/S.
transitoire/S.
transitoirement
translater/a0p+
translatif/S.
................................................................................
triterpénique/S.
trithérapie/S.
triticale/S.
tritiée/F.
tritium/S.
triton/S.
triturable/S.
triturage/S.
triturateur/S.
trituration/S.
triturer/a0p+
triumvir/S.
triumvirale/W.
triumvirat/S.
trivalence/S.
................................................................................
trois-quarts
trois-quatre
trois-six
trôler/a0p.
troll/S.
trolle/S.
troller/a0p.
trollesque/S.
trolley/S.
trolleybus
trombe/S.
trombidion/S.
trombidiose/S.
trombine/S.
trombinoscope/S.
................................................................................
tubuline/S.
tubulopathie/S.
tubulure/S.
tudesque/S.
tudieu
tue-chien
tue-diable
tue-loup
tue-l'amour
tue-mouche
tue-mouches
tuer/a0p+
tuerie/S.
tue-tête
tueuse/F.
tuf/S.
................................................................................
tyrosinémie/S.
tyrothricine/S.
tyrrhénienne/F.
tzar/S.
tzarine/S.
tzatziki/S.
tzigane/S.
t'
t'elle/S=
t'en
t'il/S=
t'on
u
u/||--
ua/||--
ubac/S*
ubérale/S*
uberisation/S*
ubique/S*
................................................................................
vidéographique/S.
vidéoludique/S.
vidéophone/S.
vidéophonie/S.
vidéoprojecteur/S.
vidéoprojection/S.
vidéoprotection/S.
vidéo-protection/S.
vide-ordures
vidéosphère/S.
vidéosurveillance/S.
vidéotex
vidéothécaire/S.
vidéothèque/S.
vidéotransmission/S.
................................................................................
vielleur/S.
vielleuse/W.
vielliste/S.
viennoise/F.
viennoiserie/S.
vierge/S.
vierge/S.
viétique/S.
vietnamienne/F.
vieux-lille
vif-argent
vif-argent
vigésimale/W.
vigie/S.
vigilamment
................................................................................
vorace/S.
voracement
voracité/S.
vortex
vorticelle/S.
vos
vosgienne/F.
voszigues
votante/F.
votation/S.
vote/S.
voter/a0p+
voteuse/F.
votive/F.
votre
................................................................................
vouer/a0p+
vouge/S.
vouivre/S.
vouloir/S.
vouloir/pB
vous
vous
vous-même
vous-mêmes
vousseau/X.
voussoiement/S.
voussoir/S.
voussoyer/a0p+
voussure/S.
voûtain/S.
voûte/S.

Modified gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-moderne.aff from [f4f2285dda] to [f70c9e0621].

# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

# AFFIXES DU DICTIONNAIRE ORTHOGRAPHIQUE FRANÇAIS “MODERNE” v6.3
# par Olivier R. -- licence MPL 2.0
# Généré le 01-07-2018 à 20:45
# Pour améliorer le dictionnaire, allez sur http://www.dicollecte.org/



SET UTF-8

WORDCHARS -’'1234567890.

# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

# AFFIXES DU DICTIONNAIRE ORTHOGRAPHIQUE FRANÇAIS “MODERNE” v7.0
# par Olivier R. -- licence MPL 2.0
# Généré le 14-09-2018 à 10:59
# Pour améliorer le dictionnaire, allez sur http://www.dicollecte.org/



SET UTF-8

WORDCHARS -’'1234567890.
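
In the affix hunk shown above, only the version banner (v6.3 to v7.0) and the generation date change; SET UTF-8 declares the file encoding and WORDCHARS lists the extra characters (hyphen, both apostrophes, digits, period) that Hunspell treats as word-internal. A minimal sketch of how the regenerated pair could be sanity-checked, assuming the pyhunspell binding (module "hunspell") is available and the repository paths below are used as-is; this is not part of the check-in itself:

# Sketch only: load the fr-moderne .dic/.aff pair and spell-check a few
# entries visible in the diff. Binding and paths are assumptions.
import hunspell

AFF = "gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-moderne.aff"
DIC = "gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-moderne.dic"

checker = hunspell.HunSpell(DIC, AFF)
for word in ("Riyad", "annuler", "annulons", "jusqu'au-boutisme", "quelqu'un"):
    print(word, checker.spell(word))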

Modified gc_lang/fr/oxt/Dictionnaires/dictionaries/fr-moderne.dic from [28a2fdee50] to [60b64caffa].
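The word-list diff that follows is easier to read with the Hunspell .dic layout in mind: the first line of the file is an approximate entry count, and each following line is a lemma, optionally followed by "/" and its affix or tag flags (e.g. "annuler/a4p+", "1er/--"). A minimal parsing sketch under that assumption, ignoring the optional tab-separated morphology fields; the helper name is hypothetical:

# Sketch only: split plain "word/flags" entries; not part of the check-in.
def parse_dic(path):
    entries = []
    with open(path, encoding="utf-8") as f:
        next(f)                            # first line: approximate entry count
        for line in f:
            line = line.split("\t")[0].strip()
            if not line:
                continue
            lemma, _, flags = line.partition("/")
            entries.append((lemma, flags))
    return entries

# e.g. the first entries shown below ("&", "1er/--", "1ers/--") would parse to
# ("&", ""), ("1er", "--"), ("1ers", "--").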

79556
&
1er/--
1ers/--
1re/--
1res/--
1ʳᵉ/--
1ʳᵉˢ/--
................................................................................
Bradley
Bradley
Brafman
Brahim
Brahma
Brahmapoutre
Brahms
Braine-l'Alleud
Braine-le-Château
Braine-le-Comte

Brakel
Brand
Brandon
Brasov
Brassac
Brasschaat
Brassica
................................................................................
Casanova
Casey
Casimir
Casimir-Perier
Caspienne
Cassandra
Cassandre
Casseurs_Flowters
Cassidy
Cassini
Cassiopée
Castafolte
Castanet-Tolosan
Castelnaudary
Castelnau-le-Lez
................................................................................
Charybde
Chase
Chasles
Chastel-Arnaud
Château-Gontier
Château-Thierry
Châteaubriant
Château-d'Œx
Château-d'Olonne
Châteaudouble
Châteaudun
Châteauguay
Châteauneuf-du-Pape
Châteauneuf-les-Martigues
Châteaurenard
Châteauroux
Châtelain
Châtelet
................................................................................
DEUG
DFSG
DG
DGSE
DGSI
DHCP
DHEA
D'Holbach
DJ
DM
DNS
DOM
DOM-TOM
DPTH
DREES
................................................................................
Dynkin
Dysnomie
Dʳ
Dʳˢ
Dʳᵉ
Dʳᵉˢ
Dᴏꜱꜱᴍᴀɴɴ

ECS/L'D'Q'
EDF/L'D'Q'
EEPROM/L'D'Q'
EFREI/L'D'Q'
EFS/L'D'Q'
EIB/L'D'Q'
ENA/L'D'Q'
................................................................................
Eeklo/L'D'Q'
Eeyou/L'
Effinergie
Égée/L'D'Q'
Éghezée/L'D'Q'
Églantine/L'D'Q'
Égypte/L'D'

Ehrenpreis/L'D'Q'
Ehresmann/L'D'Q'
Eibit/||--
Eiffel/L'D'Q'
Eileen/L'D'Q'
Eilenberg/L'D'Q'
Eilleen/L'D'Q'
................................................................................
Goebbels
Goëmar
Goethe
Gogh
Gogol
Golan
Goldbach
Golden_Show
Goldoni
Golgi
Golgotha
Goliath
Gomorrhe
Goncourt
Gondwana
................................................................................
Helvétie/L'D'
Hem/L'D'Q'
Hemiksem/L'D'Q'
Hemingway/L'D'Q'
Henan
Hénault
Hendaye/L'D'Q'

Hénin-Beaumont/L'D'Q'
Hennebont/L'D'Q'
Hénoch/L'D'Q'
Henri/L'D'Q'
Henriette/L'D'Q'
Henrique/L'D'Q'
Henry
................................................................................
Héricourt/L'D'Q'
Hermann/L'D'Q'
Hermès/L'D'Q'
Hermine/L'D'Q'
Hermione/L'D'Q'
Hermite/L'D'Q'
Hernando/L'D'Q'
Hero_Corp
Hérode/L'D'Q'
Hérodote/L'D'Q'
Hérouville-Saint-Clair/L'D'Q'
Herschel/L'D'Q'
Herselt/L'D'Q'
Herstal
Hertz
................................................................................
Joanna
Joannie
Joaquim
Jocelyn
Jocelyne
Joconde
Jocrisse

Jodie
Jodoigne
Jody
Joe
Joël
Joëlle
Joey
................................................................................
Kjeldahl
Klaus
Klee
Klein
Klimt
Klitzing
Klondike
K'nex
Knokke-Heist
Knossos
Ko/||--
Kobe
Koch
Kodaira
Koekelberg
................................................................................
Kuurne
Kyle
Kylian
Kylie
Kyllian
Kyoto
Kyushu

L/U.||--
LCD
LED
LGBT
LGBTI
LGBTIQ
LGV
................................................................................
Laval
Lavaur
Laveran
Lavoisier
Lawrence
Laxou
Lazare
Le_Bris
Léa
Leah
Léandre
Léane
Lebbeke
Lebesgue
Lebrun
................................................................................
Léonore
Léontine
Léopold
Léopoldine
Leopoldt
Léopoldville
Leroy
Les_Vigneaux
Lesage
Lesbos
Lesieur
Lesley
Leslie
Lesneven
Lesotho
................................................................................
Louvain-la-Neuve
Louvière
Louviers
Louvre
Love
Lovecraft
Lovelace
Lovely_Rita
Lovćen
Loyola
Loyre
Lozère
Luanda
Lubbeek
Lübeck
................................................................................
Mammon
Manach
Managua
Manama
Manaus
Manche
Manchester

Mandchourie
Mandela
Mandelbrot
Mandelieu-la-Napoule
Mandor
Mandy
Manet
................................................................................
Maslow
Mason
Massachusetts
Masséna
Massenet
Massimo
Massy

Masutti
Matchstick
Mateo
Mathéo
Matheron
Matheson
Mathias
................................................................................
Mérimée
Merkel
Merleau-Ponty
Merlin
Méru
Meryl
Mésie

Mésopotamie
Messaline
Messer
Messine
Météo-France
Mettet
Metz
................................................................................
Mithra
Mithridate
Mitnick
Mitry-Mory
Mitsubishi
Mittelhausbergen
Mitterrand
Mix_Bizarre
Miyabi
Mlle/S.
Mme/S.
Mnémosyne
Mo/||--
Moab
Möbius
................................................................................
Mᵍʳˢ
Mᵐᵉ
Mᵐᵉˢ
N/U.||--
NASA
NDLR
NDT
N'Djamena
NEC
NF
NIRS
NSA
Nabil
Nabuchodonosor
Nacira
Nadège
Nadia
Nadim
Nadine
Nadir

Nagasaki
Nagata
Nagoya
Nagy
Nahum
Naimark
Nairobi
................................................................................
Nusselt
Nuuk
Nvidia
Nyarlathotep
Nyons
Nyquist
Nyx

OCDE/L'D'Q'
OCaml/L'D'Q'
ODF/L'D'Q'
Œdipe/L'D'Q'
OFBiz/D'Q'
OFCE/L'D'Q'
OGM/L'D'Q'
................................................................................
Oignies/L'D'Q'
Oisans/L'
Oise/L'
Oissel/L'D'Q'
Oklahoma/L'D'
Olaf/L'D'Q'
Oldham/L'D'Q'
Olea_Medical
Oleg/L'D'Q'
Olen/L'D'Q'
Oléron/L'D'Q'
Olga/L'D'Q'
Oliver/L'D'Q'
Olivet/L'D'Q'
Olivia/L'D'Q'
................................................................................
Pullman
Pune
Purcell
Puteaux
Puurs
Puy-de-Dôme
Puy-en-Velay

Pyongyang
Pyrénées
Pyrénées-Atlantiques
Pyrénées-Orientales
Pyrrha
Pyrrhus
Pythagore
................................................................................
Rivery
Riviera
Rivière-Pilote
Rivière-Salée
Rixensart
Rixheim
Riyad
R'lyeh
R'n'B
Roanne
Rob
Robert
Roberta
Roberte
Roberto
Roberval
................................................................................
Ruth
Rutherford
Rutishauser
Rwanda
Ryan
Ryanair
Ryxeo


S/U.||--
SA
SADT
SAP
SARL
SCIC
SCOT
................................................................................
Saint-Louis
Saint-Malo
Saint-Mandé
Saint-Marin
Saint-Martin
Saint-Martin-Boulogne
Saint-Martin-Petit
Saint-Martin-d'Hères
Saint-Martin-de-Crau
Saint-Maur-des-Fossés
Saint-Maurice
Saint-Max
Saint-Maximin-la-Sainte-Baume
Saint-Médard-en-Jalles
Saint-Michel-de-Feins
Saint-Michel-sur-Orge
................................................................................
Schwerin
Schwytz
Schwyz
Scipion
Scott
Scoville
Scrameustache/S.
Scred_TV
Scudéry
Scylla
SeaMonkey
Seagate
Seamus
Sean
Seat
................................................................................
Vachez
Vadim
Vaduz
Vahan
Vaires-sur-Marne
Valais
Valbonne
Val-d'Oise
Val-d'Or
Val-de-Marne
Val-de-Reuil


Valence
Valenciennes
Valentigney
Valentin
Valentina
Valentine
Valentinien
................................................................................
Wilfred
Wilfrid
Wilfried
Wilhelm
Will
Willa
Willebroek

William
Williams
Willie
Willy
Wilma
Wilson
Windhoek
................................................................................
Xavière
Xe/--
Xebia
Xenia
Xénophane
Xénophon
Xerxès
Xi'an
Xining
Xinjiang

Xᵉ/--
YHWH
Yacine
Yaël
Yaëlle
Yahvé
Yahweh
................................................................................
annulable/S*
annulaire/S*
annulation/S*
annulative/F*
annulatrice/F*
annulement/S*
annuler/a4p+

annuus
anoblir/f4p+
anoblissante/F*
anoblissement/S*
anode/S*
anodine/F*
anodique/S*
................................................................................
autoclave/S*
autoclave/S*
autoclaviste/S*
autocollante/F*
autocommutateur/S*
autocompenser/a4p+
auto-compenser/a4p+

autoconcurrence/S*
autoconditionnement/S*
auto-conditionnement/S*
autoconduction/S*
autoconservation/S*
autoconsommation/S*
autoconstruction/S*
................................................................................
avionneuse/F*
avions-cargos/D'Q'
avipelvien/S*
aviron/S*
avirulence/S*
avis/L'D'Q'
aviser/a4p+

aviso/S*
avitaillement/S*
avitailler/a4p+
avitailleuse/F*
avitaminose/S*
avivage/S*
avivement/S*
................................................................................
binocle/S.
binoculaire/S.
binodale/S.
binôme/S.
binomiale/W.
binominale/W.
binouze/S.
bin's
bintje/S.

bio
bio/S.
bioabsorbable/S.
bioaccumulable/S.
bioaccumulation/S.
bioacoustique/S.
bioagresseur/S.
................................................................................
boui-boui
bouif/S.
bouillabaisse/S.
bouillage/S.
bouillante/F.
bouillasse/S.
bouille/S.

bouilleuse/F.
bouillie/S.
bouillir/iQ
bouillissage/S.
bouilloire/S.
bouillon/S.
bouillonnante/F.
................................................................................
caviarder/a0p+
cavicorne/S.
caviste/S.
cavitaire/S.
cavitation/S.
cavité/S.
cd/U.||--

ce
céans
cébette/S.
cébiste/S.
ceci
cécidie/S.
cécidomyie/S.
................................................................................
cesse
cesser/a0p+
cessez-le-feu
cessibilité/S.
cessible/S.
cession/S.
cessionnaire/S.
c'est-à-dire
ceste/S.
cestode/S.
césure/S.
cet
cétacé/S.
cétane/S.
céteau/X.
................................................................................
cytosolique/S.
cytosquelette/S.
cytostatique/S.
cytotoxicité/S.
cytotoxique/S.
czardas
czimbalum/S.


d
d/||--
dB/||--
daba/S.
dacite/S.
dacryoadénite/S.
dacryocystite/S.
................................................................................
datte/S.
dattier/S.
datura/S.
daube/S.
dauber/a0p+
daubeuse/F.
daubière/S.
d'aucuns
dauphine/F.
dauphinelle/S.
dauphinoise/F.
daurade/S.
davantage
davier/S.
dazibao/S.

de
dé/S.
déactiver/a0p+
deal/S.
dealer/S.
dealer/a0p+
déambulateur/S.
................................................................................
dégeler/b0p+
dégénération/S.
dégénérative/F.
dégénérée/F.
dégénérer/c0p+
dégénérescence/S.
dégénérescente/F.

dégerbage/S.
dégermage/S.
dégermer/a0p+
dégingander/a0p+
dégîter/a0p+
dégivrage/S.
dégivrante/F.
................................................................................
dépoitrailler/a0p+
dépolarisation/S.
dépolariser/a0p+
dépolir/f0p+
dépolissage/S.
dépolitisation/S.
dépolitiser/a0p+

dépolluer/a0p+
dépollution/S.
dépolymérisation/S.
dépolymériser/a0p+
déponente/F.
dépontiller/a0p.
dépopulation/S.
................................................................................
désensibilisation/S.
désensibiliser/a0p+
désensorceler/d0p+
désentoilage/S.
désentoiler/a0p+
désentortiller/a0p+
désentraver/a0p+

désenvasement/S.
désenvaser/a0p+
désenvelopper/a0p+
désenvenimer/a0p+
désenverguer/a0p+
désenvoûtement/S.
désenvoûter/a0p+
................................................................................
dystrophine/S.
dystrophique/S.
dystrophisation/S.
dysurie/S.
dysurique/S.
dytique/S.
dzêta



e
eV/U.||--
eau/X*
eau-de-vie/L'D'Q'
eau-forte/L'D'Q'
eaux-de-vie/D'Q'
eaux-fortes/D'Q'
................................................................................
entraccorder/a6p+
entraccuser/a6p+
entracte/S*
entradmirer/a6p+
entraide/S*
entraider/a6p+
entrailles/D'Q'
entr'aimer/a6p+
entrain/S*
entraînable/S*
entraînante/F*
entraînement/S*
entraîner/a4p+
entraîneuse/F*
entrait/S*
................................................................................
entrées-sorties
entrefaite/S*
entrefaites
entrefer/S*
entrefilet/S*
entre-frapper/a6p+
entregent/S*
entr'égorger/a6p+
entre-haïr/fB
entre-heurter/a6p+
entrejambe/S*
entrelacement/S*
entrelacer/a4p+
entrelacs/L'D'Q'
entrelarder/a2p+
................................................................................
entretoiser/a2p+
entre-tuer/a6p+
entrevoie/S*
entrevoir/pF
entrevous/L'D'Q'
entrevoûter/a2p+
entrevue/S*
entr'hiverner
entrisme/S*
entropie/S*
entropion/S*
entropique/S*
entroque/S*
entrouvrir/iC
entrure/S*



entuber/a2p+
enturbanner/a4p+
enture/S*
énucléation/S*
énucléer/a2p+
énumérabilité/S*
énumérable/S*
................................................................................
hypercentre/S*
hyperchimie/S*
hyperchlorhydrie/S*
hypercholestérolémie/S*
hypercholestérolémique/S*
hyperchrome/S*
hyperchromie/S*

hypercomplexe/S*
hyperconformisme/S*
hyperconnectée/F*
hypercontinentale/W*
hypercontrôle/S*
hypercorrecte/F*
hypercorrection/S*
................................................................................
hypernova/L'D'Q'
hypernovæ/D'Q'
hypéron/S*
hyperonyme/S*
hyperonymie/S*
hyperonymique/S*
hyperostose/S*

hyperparasite/S*
hyperparathyroïdie/S*
hyperphagie/S*
hyperphagique/S*
hyperphalangie/S*
hyperplan/S*
hyperplasie/S*
................................................................................
incrémentale/W*
incrémentalement/D'Q'
incrémentation/S*
incrémenter/a2p+
incrémentielle/F*
increvable/S*
incriminable/S*

incrimination/S*
incriminer/a4p+
incristallisable/S*
incritiquable/S*
incrochetable/S*
incroyable/S*
incroyablement/D'Q'
................................................................................
juron/S.
jury/S.
jus
jusant/S.
jusée/S.
jusnaturalisme/S.
jusnaturaliste/S.
jusqu/--
jusqu'au-boutisme/S.
jusqu'au-boutiste/S.
jusque
jusque-là
jusques
jusquiame/S.



jussiée/S.
jussion/S.
justaucorps
juste
juste/S.
juste-à-temps
justement
................................................................................
kyrie
kyrielle/S.
kyriologique/S.
kyste/S.
kystique/S.
kyu/S.
kyudo/S.
l
l
l/U.||--
là
la
la
la
labadens
................................................................................
leude/S.
leur
leur
leur/S.
leurre/S.
leurrer/a0p+
leurs

lev/S.
levage/S.
levageuse/F.
levain/S.
levalloisien/S.
levalloisienne/F.
lévamisole/S.
................................................................................
loricaire/S.
lorientaise/F.
loriot/S.
loriquet/S.
lorraine/F.
lorry/A.
lors
lorsqu/--
lorsque
losange/S.
losangée/F.
losangique/S.
loser/S.
lot/S.
loterie/S.
lotier/S.
................................................................................
lysogénie/S.
lysogénique/S.
lysosomale/W.
lysosome/S.
lysosomiale/W.
lysozyme/S.
lytique/S.


m
m/U.||--
mCE
mR/||--
ma
maar/S.
maboule/F.
................................................................................
mastopathie/S.
mastose/S.
mastroquet/S.
masturbation/S.
masturbatoire/S.
masturbatrice/F.
masturber/a0p+
m'as-tu-vu
masure/S.
masurium/S.
mât/S.
matabiche/S.
matabicher/a0p+
matador/S.
mataf/S.
................................................................................
mazouter/a0p+
mazurka/S.
mbalax
mbar/||--
me
mea-culpa
méandre/S.

méandriforme/S.
méandrine/S.
méat/S.
méatoscopie/S.
mébibit/S.
mébioctet/S.
mec/S.
................................................................................
mémoration/S.
mémorial/X.
mémorialiste/S.
mémorielle/F.
mémorisable/S.
mémorisation/S.
mémoriser/a0p+
m'en
menaçante/F.
menace/S.
menacer/a0p+
ménade/S.
ménage/S.
ménageable/S.
ménagement/S.
................................................................................
mésestimer/a0p+
mésiale/W.
mésintelligence/S.
mésinterprétation/S.
mésique/S.
mesmérienne/F.
mesmérisme/S.

mésoblaste/S.
mésoblastique/S.
mésocarpe/S.
mésocentre/S.
mésocéphale/S.
mésocéphale/S.
mésocéphalique/S.
................................................................................
myxœdème/S.
myxomatose/S.
myxome/S.
myxomycète/S.
myxovirus
m²
m³



n
na
naan/S.
nabab/S.
nabatéenne/F.
nabi/S.
nabisme/S.
................................................................................
neuraminique/S.
neurasthénie/S.
neurasthénique/S.
neurinome/S.
neuroanatomie/S.
neuroanatomique/S.
neuroanatomiste/S.

neurobiochimie/S.
neurobiochimique/S.
neurobiochimiste/S.
neurobiologie/S.
neurobiologique/S.
neurobiologiste/S.
neuroblaste/S.
................................................................................
nicotiniser/a0p+
nicotinisme/S.
nictation/S.
nictitante/F.
nictitation/S.
nid/S.
nidation/S.
nid-d'abeilles
nid-de-pie
nid-de-poule

nidicole/S.
nidification/S.
nidificatrice/F.
nidifier/a0p.
nidifuge/S.
nids-d'abeilles
nids-de-pie
nids-de-poule

nièce/S.
niellage/S.
nielle/S.
nieller/a0p+
nielleur/S.
niellure/S.
nier/a0p+
................................................................................
nilpotente/F.
nilvariété/S.
nimbe/S.
nimber/a0p+
nimbostratus
nimbus
nîmoise/F.
n'importe
ninas
ninja/S.
ninjato/S.
niobate/S.
niobite/S.
niobium/S.
niôle/S.
................................................................................
nosologie/S.
nosologique/S.
nosophobie/S.
nostalgie/S.
nostalgique/S.
nostalgiquement
nostoc/S.

notabilité/S.
notable/S.
notable/S.
notablement
notaire/S.
notairesse/S.
notamment
................................................................................
nourrissante/F.
nourrissement/S.
nourrisseur/S.
nourrisson/S.
nourriture/S.
nous
nous
nous-même/S=

nouure/S.
nouveau-née/F.
nouveauté/S.
nouvel
nouvelle/W.
nouvellement
nouvelleté/S.
................................................................................
nymphette/S.
nympho/S.
nymphomane/S.
nymphomanie/S.
nymphoplastie/S.
nymphose/S.
nystagmus


ô
o
o/||--
oaï/S*
oaristys/L'D'Q'
oasienne/F*
oasis/L'D'Q'
................................................................................
orchestration/S*
orchestratrice/F*
orchestre/S*
orchestrer/a2p+
orchidacée/S*
orchidacée/S*
orchidée/S*

orchi-épididymite/S*
orchis/L'D'Q'
orchite/S*
ordalie/S*
ordalique/S*
ordi/S*
ordinaire/S*
................................................................................
pas-à-pas
pas-à-pas
pascal/Um
pascale/F.
pascalienne/F.
pascaline/S.
pascaux
pas-d'âne
pas-de-géant
pas-de-porte

paseo/S.
pashmina/S.
pasionaria/S.
paso-doble
pasquin/S.
pasquinade/S.
passable/S.
................................................................................
pétiole/S.
pétiolée/F.
petiote/F.
petit-beurre
petit-bois
petit-bourgeois
petit-boutiste/S.
petit-déj'
petit-déjeuner
petit-déjeuner/a0p.
petite/F.
petite-bourgeoise
petite-fille
petite-maîtresse
petitement
petite-nièce
petites-bourgeoises
................................................................................
pie
pie/S.
pièce/S.
piécette/S.
pied/S.
pied-à-terre
pied-bot
pied-d'alouette
pied-de-biche
pied-de-cheval
pied-de-chèvre
pied-de-loup
pied-de-mouton
pied-de-poule
pied-de-veau

pied-d'oiseau
piédestal/X.
pied-noir
piédouche/S.
pied-plat
piédroit/S.
pieds-bots
pieds-d'alouette
pieds-de-biche
pieds-de-cheval
pieds-de-chèvre
pieds-de-loup
pieds-de-mouton
pieds-de-poule
pieds-de-veau

pieds-d'oiseau
pieds-noirs
pieds-plats
piéfort/S.
piège/S.
piégeable/S.
piégeage/S.
piéger/c0p+
piégeuse/F.
piégeuse/W.
pie-grièche
piémont/S.
piémontaise/F.

piercing/S.
piéride/S.
pierrade/S.
pierrage/S.
pierraille/S.
pierre/S.
pierrer/a0p+
................................................................................
pinière/S.
pinne/S.
pinnipède/S.
pinnothère/S.
pinnule/S.
pinocytose/S.
pinot/S.
pin's
pinson/S.
pintade/S.
pintadeau/X.
pintadine/S.
pinte/S.
pinter/a0p+
pin-up
pinyin

piochage/S.
pioche/S.
piochement/S.
piocher/a0p+
piocheuse/F.
pioger/a0p.
piolet/S.
................................................................................
pompeuse/F.
pompeuse/W.
pompeusement
pompière/F.
pompiérisme/S.
pompile/S.
pompiste/S.

pompon/S.
pomponner/a0p+
ponant/S.
ponantaise/F.
ponçage/S.
ponce/S.
ponce/S.
................................................................................
pontifiante/F.
pontificale/W.
pontificalement
pontificat/S.
pontifier/a0p.
pontil/S.
pontiller/a0p+
pont-l'évêque
pont-levis

ponton/S.
pontonnier/S.
pont-promenade
ponts-levis
ponts-promenades
pontuseau/X.
pool/S.
................................................................................
président-directeur
présidente/F.
présidente-directrice
présidentes-directrices
présidentiable/S.
présidentialisation/S.
présidentialisme/S.

présidentielle/F.
présidents-directeurs
présider/a0p+
présidial/X.
présidiale/W.
présidialité/S.
présidium/S.
................................................................................
priapisme/S.
prie-Dieu
prier/a0p+
prière/S.
prieure/F.
prieuré/S.
prieuse/S.
prim'Holstein
prima-donna/I.
primage/S.
primaire/S.
primairement
primale/W.
primalité/S.
primarisation/S.
................................................................................
primo-délinquante/F.
primogéniture/S.
primo-infection/S.
primordiale/W.
primordialement
primordialité/S.
primulacée/S.

prince-de-galles
prince-de-galles
princeps
princeps
princesse/F.
princière/F.
princièrement
................................................................................
ptérosaure/S.
ptérosaurien/S.
ptérygion/S.
ptérygoïde/S.
ptérygoïdienne/F.
ptérygote/S.
ptérygotus
p'tite/F.
ptolémaïque/S.
ptoléméenne/F.
ptomaïne/S.
ptôse/S.
ptosis
ptôsis
ptyaline/S.
................................................................................
puis
puisage/S.
puisard/S.
puisatier/S.
puisement/S.
puiser/a0p+
puisette/S.
puisqu/--
puisque
puissamment
puissance/S.
puissante/F.
puits
pulicaire/S.
pulicaire/S.
pull/S.
................................................................................
pycnogonide/S.
pycnomètre/S.
pycnose/S.
pycnotique/S.
pyélite/S.
pyélonéphrite/S.
pygargue/S.

pygmée/S.
pygméenne/F.
pyjama/S.
pylône/S.
pylore/S.
pylorique/S.
pyocyanique/S.
................................................................................
pythique/S.
pythique/S.
python/S.
pythonisse/S.
pyurie/S.
pyxide/S.
pz/||--

q
qPCR
qanat/S.
qatarie/F.
qatarienne/F.
qbit/S.
qi
qu/--
qua
quad/S.
quadra
quadra/S.
quadragénaire/S.
quadragésimale/W.
quadragésime/S.
................................................................................
québécoise/F.
quebracho/S.
quechua/S.
queen/S.
queer/S.
quelconque/S.
quelle/F.
quelqu/--
quelque
quelque/S.
quelque/S.
quelquefois
quelques-unes
quelques-uns

quelqu'un
quelqu'une
quémande/S.
quémander/a0p+
quémandeuse/F.
qu'en-dira-t-on
quenelle/S.
quenotte/S.
quenouille/S.
quenouillée/S.
quenouillette/S.
quéquette/S.
quérable/S.
................................................................................
quêteuse/F.
quetsche/S.
quetschier/S.
quetter/a0p+
quetzal/S.
quetzales
queue/S.
queue-d'aronde
queue-de-cheval
queue-de-cochon
queue-de-morue
queue-de-pie
queue-de-rat
queue-de-renard
queues-d'aronde
queues-de-cheval
queues-de-cochon
queues-de-morue
queues-de-pie
queues-de-rat
queues-de-renard

queuillère/S.
queursage/S.
queusot/S.
queutarde/F.
queuter/a0p+
queux
qui
................................................................................
qui-vive
quiz
quizalofop/S.
quo
quoailler
quoc-ngu
quoi
quoiqu/--
quoique
quolibet/S.
quorum/S.
quota/S.
quote-part
quotes-parts
quotidienne/F.
quotidiennement
quotidienneté/S.
quotient/S.
quotité/S.
quotter/a0p.


qwerty
r
ra
rab/S.
rabâchage/S.
rabâchement/S.
rabâcher/a0p+
................................................................................
ratichon/S.
raticide/S.
raticide/S.
ratière/F.
ratification/S.
ratifier/a0p+
ratinage/S.

ratiner/a0p+
rating/S.
ratio/S.
ratiocinante/F.
ratiocination/S.
ratiociner/a0p.
ratiocineuse/F.
................................................................................
recéder/c0p+
recel/S.
recèlement/S.
receler/b0p+
receleuse/F.
récemment
récence/S.

recensement/S.
recenser/a0p+
recenseuse/F.
recension/S.
récente/F.
recentrage/S.
recentrement/S.
................................................................................
récuser/a0p+
recyclabilité/S.
recyclable/S.
recyclage/S.
recycler/a0p+
recyclerie/S.
recycleuse/F.

rédaction/S.
rédactionnel/S.
rédactionnelle/F.
rédactrice/F.
redan/S.
reddition/S.
redéclarer/a0p+
................................................................................
résistible/S.
résistive/F.
résistivité/S.
résistor/S.
resituer/a0p+
resocialisation/S.
resocialiser/a0p+

résolubilité/S.
résoluble/S.
résolument
résolution/S.
résolutive/F.
résolutoire/S.
résolvance/S.
................................................................................
rythmicité/S.
rythmique/S.
rythmiquement
s
s/U.||--
sa
saanen/S.
s'abader
sabayon/S.
sabbat/S.
sabbathienne/F.
sabbatique/S.
sabéenne/F.
sabéisme/S.
sabelle/S.
................................................................................
sabouler/a0p+
sabra/S.
sabrage/S.
sabre/S.
sabrer/a0p+
sabretache/S.
sabreuse/F.
s'abriller
saburrale/W.
saburre/S.
sac/S.
sacagner/a0p+
saccade/S.
saccader/a0p+
saccage/S.
................................................................................
sagard/S.
sage/S.
sage-femme
sagement
sages-femmes
sagesse/S.
sagette/S.
s'agir/fZ
sagittaire/S.
sagittale/W.
sagittée/F.
sagou/S.
sagouin/S.
sagoutier/S.
sagum/S.
................................................................................
surharmonique/S.
surhaussement/S.
surhausser/a0p+
surhomme/S.
surhumaine/F.
surhumainement
surhumanité/S.

suricate/S.
surie/F.
surimi/S.
surimposer/a0p+
surimposition/S.
surimpression/S.
surimpressionner/a0p+
................................................................................
surplomber/a0p+
surplus
surpoids
surpopulation/S.
surprenamment
surprenante/F.
surprendre/tF

surpresseur/S.
surpression/S.
surprime/S.
surprise/S.
surprise-partie
surprises-parties
surproduction/S.
................................................................................
systémicienne/F.
systémique/S.
systole/S.
systolique/S.
systyle/S.
systyle/S.
syzygie/S.




t
t/||--
ta
tabac
tabac/S.
tabacologie/S.
tabacologue/S.
................................................................................
télex
télexer/a0p+
télexiste/S.
télicité/S.
tell/S.
telle/F.
telle/F.
t'elle/S=
tellement
tellière/S.
tellière/S.
telline/S.
tellurate/S.
tellure/S.
tellureuse/W.
................................................................................
temporellement
temporisation/S.
temporisatrice/F.
temporiser/a0p+
temporo-pariétale/F.
temps
temps-réel
t'en
tenable/S.
tenace/S.
tenacement
ténacité/S.
tenaille/S.
tenaillement/S.
tenailler/a0p+
................................................................................
tignasse/S.
tigrer/a0p+
tigresse/F.
tigridie/S.
tigron/S.
tiguidou/S.
tiki/S.
t'il/S=
tilapia/S.
tilbury/S.
tilde/S.
tiliacée/S.
tillac/S.
tillage/S.
tillandsie/S.
................................................................................
tomodensitomètre/S.
tomodensitométrie/S.
tomodensitométrique/S.
tomographe/S.
tomographie/S.
tomographique/S.
tom-pouce
t'on
ton
ton/S.
tonale/F.
tonalité/S.
tondage/S.
tondaille/S.
tondaison/S.
................................................................................
transistorisation/S.
transistoriser/a0p+
transit/S.
transitaire/S.
transiter/a0p+
transition/S.
transitionnelle/F.

transitive/F.
transitivement
transitivité/S.
transitoire/S.
transitoirement
translater/a0p+
translatif/S.
................................................................................
triterpénique/S.
trithérapie/S.
triticale/S.
tritiée/F.
tritium/S.
triton/S.
triturable/S.

triturateur/S.
trituration/S.
triturer/a0p+
triumvir/S.
triumvirale/W.
triumvirat/S.
trivalence/S.
................................................................................
trois-quarts
trois-quatre
trois-six
trôler/a0p.
troll/S.
trolle/S.
troller/a0p.

trolley/S.
trolleybus
trombe/S.
trombidion/S.
trombidiose/S.
trombine/S.
trombinoscope/S.
................................................................................
tubuline/S.
tubulopathie/S.
tubulure/S.
tudesque/S.
tudieu
tue-chien
tue-diable
tue-l'amour
tue-loup
tue-mouche
tuer/a0p+
tuerie/S.
tue-tête
tueuse/F.
tuf/S.
tuffeau/X.
................................................................................
tyrosinase/S.
tyrosine/S.
tyrosinémie/S.
tyrothricine/S.
tyrrhénienne/F.
tzatziki/S.
tzigane/S.





u
u/||--
ua/||--
ubac/S*
ubérale/S*
uberisation/S*
ubique/S*
................................................................................
vidéographique/S.
vidéoludique/S.
vidéophone/S.
vidéophonie/S.
vidéoprojecteur/S.
vidéoprojection/S.
vidéoprotection/S.

vide-ordures
vidéosphère/S.
vidéosurveillance/S.
vidéotex
vidéothécaire/S.
vidéothèque/S.
vidéotransmission/S.
................................................................................
vielleur/S.
vielleuse/W.
vielliste/S.
viennoise/F.
viennoiserie/S.
vierge/S.
vierge/S.

vietnamienne/F.
vieux-lille
vif-argent
vif-argent
vigésimale/W.
vigie/S.
vigilamment
................................................................................
vorace/S.
voracement
voracité/S.
vortex
vorticelle/S.
vos
vosgienne/F.

votante/F.
votation/S.
vote/S.
voter/a0p+
voteuse/F.
votive/F.
votre
................................................................................
vouer/a0p+
vouge/S.
vouivre/S.
vouloir/S.
vouloir/pB
vous
vous
vous-même/S=

vousseau/X.
voussoiement/S.
voussoir/S.
voussoyer/a0p+
voussure/S.
voûtain/S.
voûte/S.
79600
&
1er/--
1ers/--
1re/--
1res/--
1ʳᵉ/--
1ʳᵉˢ/--
................................................................................
Bradley
Bradley
Brafman
Brahim
Brahma
Brahmapoutre
Brahms

Braine-le-Château
Braine-le-Comte
Braine-l'Alleud
Brakel
Brand
Brandon
Brasov
Brassac
Brasschaat
Brassica
................................................................................
Casanova
Casey
Casimir
Casimir-Perier
Caspienne
Cassandra
Cassandre

Cassidy
Cassini
Cassiopée
Castafolte
Castanet-Tolosan
Castelnaudary
Castelnau-le-Lez
................................................................................
Charybde
Chase
Chasles
Chastel-Arnaud
Château-Gontier
Château-Thierry
Châteaubriant
Châteaudouble
Châteaudun
Château-d'Œx
Château-d'Olonne
Châteauguay
Châteauneuf-du-Pape
Châteauneuf-les-Martigues
Châteaurenard
Châteauroux
Châtelain
Châtelet
................................................................................
DEUG
DFSG
DG
DGSE
DGSI
DHCP
DHEA

DJ
DM
DNS
DOM
DOM-TOM
DPTH
DREES
................................................................................
Dynkin
Dysnomie
Dʳ
Dʳˢ
Dʳᵉ
Dʳᵉˢ
Dᴏꜱꜱᴍᴀɴɴ
D'Holbach
ECS/L'D'Q'
EDF/L'D'Q'
EEPROM/L'D'Q'
EFREI/L'D'Q'
EFS/L'D'Q'
EIB/L'D'Q'
ENA/L'D'Q'
................................................................................
Eeklo/L'D'Q'
Eeyou/L'
Effinergie
Égée/L'D'Q'
Éghezée/L'D'Q'
Églantine/L'D'Q'
Égypte/L'D'
Ehlers-Danlos
Ehrenpreis/L'D'Q'
Ehresmann/L'D'Q'
Eibit/||--
Eiffel/L'D'Q'
Eileen/L'D'Q'
Eilenberg/L'D'Q'
Eilleen/L'D'Q'
................................................................................
Goebbels
Goëmar
Goethe
Gogh
Gogol
Golan
Goldbach

Goldoni
Golgi
Golgotha
Goliath
Gomorrhe
Goncourt
Gondwana
................................................................................
Helvétie/L'D'
Hem/L'D'Q'
Hemiksem/L'D'Q'
Hemingway/L'D'Q'
Henan
Hénault
Hendaye/L'D'Q'
Hendrik/L'D'Q'
Hénin-Beaumont/L'D'Q'
Hennebont/L'D'Q'
Hénoch/L'D'Q'
Henri/L'D'Q'
Henriette/L'D'Q'
Henrique/L'D'Q'
Henry
................................................................................
Héricourt/L'D'Q'
Hermann/L'D'Q'
Hermès/L'D'Q'
Hermine/L'D'Q'
Hermione/L'D'Q'
Hermite/L'D'Q'
Hernando/L'D'Q'

Hérode/L'D'Q'
Hérodote/L'D'Q'
Hérouville-Saint-Clair/L'D'Q'
Herschel/L'D'Q'
Herselt/L'D'Q'
Herstal
Hertz
................................................................................
Joanna
Joannie
Joaquim
Jocelyn
Jocelyne
Joconde
Jocrisse
Jodhpur
Jodie
Jodoigne
Jody
Joe
Joël
Joëlle
Joey
................................................................................
Kjeldahl
Klaus
Klee
Klein
Klimt
Klitzing
Klondike

Knokke-Heist
Knossos
Ko/||--
Kobe
Koch
Kodaira
Koekelberg
................................................................................
Kuurne
Kyle
Kylian
Kylie
Kyllian
Kyoto
Kyushu
K'nex
L/U.||--
LCD
LED
LGBT
LGBTI
LGBTIQ
LGV
................................................................................
Laval
Lavaur
Laveran
Lavoisier
Lawrence
Laxou
Lazare

Léa
Leah
Léandre
Léane
Lebbeke
Lebesgue
Lebrun
................................................................................
Léonore
Léontine
Léopold
Léopoldine
Leopoldt
Léopoldville
Leroy

Lesage
Lesbos
Lesieur
Lesley
Leslie
Lesneven
Lesotho
................................................................................
Louvain-la-Neuve
Louvière
Louviers
Louvre
Love
Lovecraft
Lovelace

Lovćen
Loyola
Loyre
Lozère
Luanda
Lubbeek
Lübeck
................................................................................
Mammon
Manach
Managua
Manama
Manaus
Manche
Manchester
Mandalay
Mandchourie
Mandela
Mandelbrot
Mandelieu-la-Napoule
Mandor
Mandy
Manet
................................................................................
Maslow
Mason
Massachusetts
Masséna
Massenet
Massimo
Massy
MasterCard
Masutti
Matchstick
Mateo
Mathéo
Matheron
Matheson
Mathias
................................................................................
Mérimée
Merkel
Merleau-Ponty
Merlin
Méru
Meryl
Mésie
Mésoamérique
Mésopotamie
Messaline
Messer
Messine
Météo-France
Mettet
Metz
................................................................................
Mithra
Mithridate
Mitnick
Mitry-Mory
Mitsubishi
Mittelhausbergen
Mitterrand

Miyabi
Mlle/S.
Mme/S.
Mnémosyne
Mo/||--
Moab
Möbius
................................................................................
Mᵍʳˢ
Mᵐᵉ
Mᵐᵉˢ
N/U.||--
NASA
NDLR
NDT

NEC
NF
NIRS
NSA
Nabil
Nabuchodonosor
Nacira
Nadège
Nadia
Nadim
Nadine
Nadir
Nadja
Nagasaki
Nagata
Nagoya
Nagy
Nahum
Naimark
Nairobi
................................................................................
Nusselt
Nuuk
Nvidia
Nyarlathotep
Nyons
Nyquist
Nyx
N'Djamena
OCDE/L'D'Q'
OCaml/L'D'Q'
ODF/L'D'Q'
Œdipe/L'D'Q'
OFBiz/D'Q'
OFCE/L'D'Q'
OGM/L'D'Q'
................................................................................
Oignies/L'D'Q'
Oisans/L'
Oise/L'
Oissel/L'D'Q'
Oklahoma/L'D'
Olaf/L'D'Q'
Oldham/L'D'Q'

Oleg/L'D'Q'
Olen/L'D'Q'
Oléron/L'D'Q'
Olga/L'D'Q'
Oliver/L'D'Q'
Olivet/L'D'Q'
Olivia/L'D'Q'
................................................................................
Pullman
Pune
Purcell
Puteaux
Puurs
Puy-de-Dôme
Puy-en-Velay
Pygmalion
Pyongyang
Pyrénées
Pyrénées-Atlantiques
Pyrénées-Orientales
Pyrrha
Pyrrhus
Pythagore
................................................................................
Rivery
Riviera
Rivière-Pilote
Rivière-Salée
Rixensart
Rixheim
Riyad


Roanne
Rob
Robert
Roberta
Roberte
Roberto
Roberval
................................................................................
Ruth
Rutherford
Rutishauser
Rwanda
Ryan
Ryanair
Ryxeo
R'lyeh
R'n'B
S/U.||--
SA
SADT
SAP
SARL
SCIC
SCOT
................................................................................
Saint-Louis
Saint-Malo
Saint-Mandé
Saint-Marin
Saint-Martin
Saint-Martin-Boulogne
Saint-Martin-Petit
Saint-Martin-de-Crau
Saint-Martin-d'Hères
Saint-Maur-des-Fossés
Saint-Maurice
Saint-Max
Saint-Maximin-la-Sainte-Baume
Saint-Médard-en-Jalles
Saint-Michel-de-Feins
Saint-Michel-sur-Orge
................................................................................
Schwerin
Schwytz
Schwyz
Scipion
Scott
Scoville
Scrameustache/S.

Scudéry
Scylla
SeaMonkey
Seagate
Seamus
Sean
Seat
................................................................................
Vachez
Vadim
Vaduz
Vahan
Vaires-sur-Marne
Valais
Valbonne


Val-de-Marne
Val-de-Reuil
Val-d'Oise
Val-d'Or
Valence
Valenciennes
Valentigney
Valentin
Valentina
Valentine
Valentinien
................................................................................
Wilfred
Wilfrid
Wilfried
Wilhelm
Will
Willa
Willebroek
Willem
William
Williams
Willie
Willy
Wilma
Wilson
Windhoek
................................................................................
Xavière
Xe/--
Xebia
Xenia
Xénophane
Xénophon
Xerxès

Xining
Xinjiang
Xi'an
Xᵉ/--
YHWH
Yacine
Yaël
Yaëlle
Yahvé
Yahweh
................................................................................
annulable/S*
annulaire/S*
annulation/S*
annulative/F*
annulatrice/F*
annulement/S*
annuler/a4p+
annulingus/L'D'Q'
annuus
anoblir/f4p+
anoblissante/F*
anoblissement/S*
anode/S*
anodine/F*
anodique/S*
................................................................................
autoclave/S*
autoclave/S*
autoclaviste/S*
autocollante/F*
autocommutateur/S*
autocompenser/a4p+
auto-compenser/a4p+
autocomplétion/S*
autoconcurrence/S*
autoconditionnement/S*
auto-conditionnement/S*
autoconduction/S*
autoconservation/S*
autoconsommation/S*
autoconstruction/S*
................................................................................
avionneuse/F*
avions-cargos/D'Q'
avipelvien/S*
aviron/S*
avirulence/S*
avis/L'D'Q'
aviser/a4p+
aviseur/S*
aviso/S*
avitaillement/S*
avitailler/a4p+
avitailleuse/F*
avitaminose/S*
avivage/S*
avivement/S*
................................................................................
binocle/S.
binoculaire/S.
binodale/S.
binôme/S.
binomiale/W.
binominale/W.
binouze/S.

bintje/S.
bin's
bio
bio/S.
bioabsorbable/S.
bioaccumulable/S.
bioaccumulation/S.
bioacoustique/S.
bioagresseur/S.
................................................................................
boui-boui
bouif/S.
bouillabaisse/S.
bouillage/S.
bouillante/F.
bouillasse/S.
bouille/S.
bouillette/S.
bouilleuse/F.
bouillie/S.
bouillir/iQ
bouillissage/S.
bouilloire/S.
bouillon/S.
bouillonnante/F.
................................................................................
caviarder/a0p+
cavicorne/S.
caviste/S.
cavitaire/S.
cavitation/S.
cavité/S.
cd/U.||--
ce
ce
céans
cébette/S.
cébiste/S.
ceci
cécidie/S.
cécidomyie/S.
................................................................................
cesse
cesser/a0p+
cessez-le-feu
cessibilité/S.
cessible/S.
cession/S.
cessionnaire/S.

ceste/S.
cestode/S.
césure/S.
cet
cétacé/S.
cétane/S.
céteau/X.
................................................................................
cytosolique/S.
cytosquelette/S.
cytostatique/S.
cytotoxicité/S.
cytotoxique/S.
czardas
czimbalum/S.
c'
c'est-à-dire
d
d/||--
dB/||--
daba/S.
dacite/S.
dacryoadénite/S.
dacryocystite/S.
................................................................................
datte/S.
dattier/S.
datura/S.
daube/S.
dauber/a0p+
daubeuse/F.
daubière/S.

dauphine/F.
dauphinelle/S.
dauphinoise/F.
daurade/S.
davantage
davier/S.
dazibao/S.
de
de
dé/S.
déactiver/a0p+
deal/S.
dealer/S.
dealer/a0p+
déambulateur/S.
................................................................................
dégeler/b0p+
dégénération/S.
dégénérative/F.
dégénérée/F.
dégénérer/c0p+
dégénérescence/S.
dégénérescente/F.
dégenrer/a0p+
dégerbage/S.
dégermage/S.
dégermer/a0p+
dégingander/a0p+
dégîter/a0p+
dégivrage/S.
dégivrante/F.
................................................................................
dépoitrailler/a0p+
dépolarisation/S.
dépolariser/a0p+
dépolir/f0p+
dépolissage/S.
dépolitisation/S.
dépolitiser/a0p+
dépolluante/F.
dépolluer/a0p+
dépollution/S.
dépolymérisation/S.
dépolymériser/a0p+
déponente/F.
dépontiller/a0p.
dépopulation/S.
................................................................................
désensibilisation/S.
désensibiliser/a0p+
désensorceler/d0p+
désentoilage/S.
désentoiler/a0p+
désentortiller/a0p+
désentraver/a0p+
désentrelacer/a4p+
désenvasement/S.
désenvaser/a0p+
désenvelopper/a0p+
désenvenimer/a0p+
désenverguer/a0p+
désenvoûtement/S.
désenvoûter/a0p+
................................................................................
dystrophine/S.
dystrophique/S.
dystrophisation/S.
dysurie/S.
dysurique/S.
dytique/S.
dzêta
d'
d'
d'aucuns
e
eV/U.||--
eau/X*
eau-de-vie/L'D'Q'
eau-forte/L'D'Q'
eaux-de-vie/D'Q'
eaux-fortes/D'Q'
................................................................................
entraccorder/a6p+
entraccuser/a6p+
entracte/S*
entradmirer/a6p+
entraide/S*
entraider/a6p+
entrailles/D'Q'

entrain/S*
entraînable/S*
entraînante/F*
entraînement/S*
entraîner/a4p+
entraîneuse/F*
entrait/S*
................................................................................
entrées-sorties
entrefaite/S*
entrefaites
entrefer/S*
entrefilet/S*
entre-frapper/a6p+
entregent/S*

entre-haïr/fB
entre-heurter/a6p+
entrejambe/S*
entrelacement/S*
entrelacer/a4p+
entrelacs/L'D'Q'
entrelarder/a2p+
................................................................................
entretoiser/a2p+
entre-tuer/a6p+
entrevoie/S*
entrevoir/pF
entrevous/L'D'Q'
entrevoûter/a2p+
entrevue/S*

entrisme/S*
entropie/S*
entropion/S*
entropique/S*
entroque/S*
entrouvrir/iC
entrure/S*
entr'aimer/a6p+
entr'égorger/a6p+
entr'hiverner
entuber/a2p+
enturbanner/a4p+
enture/S*
énucléation/S*
énucléer/a2p+
énumérabilité/S*
énumérable/S*
................................................................................
hypercentre/S*
hyperchimie/S*
hyperchlorhydrie/S*
hypercholestérolémie/S*
hypercholestérolémique/S*
hyperchrome/S*
hyperchromie/S*
hypercoagulabilité/S*
hypercomplexe/S*
hyperconformisme/S*
hyperconnectée/F*
hypercontinentale/W*
hypercontrôle/S*
hypercorrecte/F*
hypercorrection/S*
................................................................................
hypernova/L'D'Q'
hypernovæ/D'Q'
hypéron/S*
hyperonyme/S*
hyperonymie/S*
hyperonymique/S*
hyperostose/S*
hyperoxie/S*
hyperparasite/S*
hyperparathyroïdie/S*
hyperphagie/S*
hyperphagique/S*
hyperphalangie/S*
hyperplan/S*
hyperplasie/S*
................................................................................
incrémentale/W*
incrémentalement/D'Q'
incrémentation/S*
incrémenter/a2p+
incrémentielle/F*
increvable/S*
incriminable/S*
incriminante/F*
incrimination/S*
incriminer/a4p+
incristallisable/S*
incritiquable/S*
incrochetable/S*
incroyable/S*
incroyablement/D'Q'
................................................................................
juron/S.
jury/S.
jus
jusant/S.
jusée/S.
jusnaturalisme/S.
jusnaturaliste/S.



jusque
jusque-là
jusques
jusquiame/S.
jusqu'/--
jusqu'au-boutisme/S.
jusqu'au-boutiste/S.
jussiée/S.
jussion/S.
justaucorps
juste
juste/S.
juste-à-temps
justement
................................................................................
kyrie
kyrielle/S.
kyriologique/S.
kyste/S.
kystique/S.
kyu/S.
kyudo/S.

l
l/U.||--
là
la
la
la
labadens
................................................................................
leude/S.
leur
leur
leur/S.
leurre/S.
leurrer/a0p+
leurs
leurszigues
lev/S.
levage/S.
levageuse/F.
levain/S.
levalloisien/S.
levalloisienne/F.
lévamisole/S.
................................................................................
loricaire/S.
lorientaise/F.
loriot/S.
loriquet/S.
lorraine/F.
lorry/A.
lors
lorsque
lorsqu'/--
losange/S.
losangée/F.
losangique/S.
loser/S.
lot/S.
loterie/S.
lotier/S.
................................................................................
lysogénie/S.
lysogénique/S.
lysosomale/W.
lysosome/S.
lysosomiale/W.
lysozyme/S.
lytique/S.
l'
l'
m
m/U.||--
mCE
mR/||--
ma
maar/S.
maboule/F.
................................................................................
mastopathie/S.
mastose/S.
mastroquet/S.
masturbation/S.
masturbatoire/S.
masturbatrice/F.
masturber/a0p+

masure/S.
masurium/S.
mât/S.
matabiche/S.
matabicher/a