Age | Commit message | Author | Files | Lines (-/+)
2022-12-14test: fix test binary build for Windows.Jehan1-2/+9
realpath() doesn't exist on Windows. Replace it with _fullpath(), which does the same thing as far as I can see (at least for creating an absolute path; it doesn't seem to canonicalize the path, or the docs don't say so, but since we control the arguments from our CMake script, it's not a big problem anyway). This fixes the CI build for Windows, which was failing with: > undefined reference to `realpath'
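A minimal sketch (not the actual test code) of the kind of portability shim described above, where _fullpath() stands in for realpath() when building for Windows:

    #include <stdlib.h>   /* realpath() on POSIX, _fullpath() on Windows */

    static char *absolute_path(const char *path, char *buffer, size_t size)
    {
    #ifdef _WIN32
      /* Builds an absolute path; does not resolve symlinks. */
      return _fullpath(buffer, path, size);
    #else
      /* Also canonicalizes the path (symlinks, "..", etc.).
       * buffer must be at least PATH_MAX bytes; size is unused here. */
      (void) size;
      return realpath(path, buffer);
    #endif
    }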
2022-12-14src: reset shortcut charset/language on Reset().Jehan1-0/+8
Failing to do so, we would always return the same language once we had detected a shortcut one, even after resetting. For instance, the issue showed up in the uchardet CLI tool.
2022-12-14src: do not test with nsLatin1Prober anymore.Jehan1-2/+9
Just commenting it out for now. This prober is just not good enough and could take over detection when other probers have low (yet reasonable) confidence, returning an ugly WINDOWS-1252 with no language detection. I think we should even get rid of it completely. For now, I just comment it out temporarily and will see with further experiments.
2022-12-14src: improve confidence computation (generic and single-byte charset).Jehan3-26/+31
Nearly the same algorithm in both pieces of code now. I reintroduced mTypicalPositiveRatio now that our models actually give the right ratio (not the meaningless "first 512" value anymore). Among the remaining differences, the last computation is the ratio of frequent characters over all characters. For the generic detector, we use the frequent+out sum instead, which works much better. I think Unicode text is much more prone to contain characters outside your expected range while still being meaningful; even control characters are much more meaningful in Unicode. So a ratio based on frequent characters alone would yield much too low a confidence. Anyway, this confidence algorithm is already better. We seem to reach much nicer confidence values at each iteration, very satisfying!
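A rough sketch of the shape of the confidence computation described above (simplified, with hypothetical names; the real code differs in details):

    #include <algorithm>

    float sequence_confidence(unsigned positive_seqs, unsigned total_seqs,
                              unsigned freq_chars, unsigned total_chars,
                              float typical_positive_ratio)
    {
      if (total_seqs == 0 || total_chars == 0)
        return 0.01f;

      // Sequence part: positive sequences, normalized by the typical ratio of
      // positive sequences measured on real text for this language model.
      float conf = (float) positive_seqs / total_seqs / typical_positive_ratio;

      // Character part: the single-byte prober uses frequent chars over all
      // chars; the generic Unicode detector uses (frequent + out-of-range)
      // chars instead, as explained above.
      conf *= (float) freq_chars / total_chars;

      return std::min(conf, 0.99f);
    }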
2022-12-14script: generate more complete frequent characters when range is set.Jehan1-19/+16
The early version used to stop earlier, assuming frequent ranges were used only for language scripts with a lot of characters (such as Korean, or even more so Japanese or Chinese), hence it was not efficient to keep data for them all. Since we now use a separate language detector for CJK, the remaining scripts (so far) have a usable range of characters, so it is much preferable to keep as much data as possible on these. This allowed redoing the Thai model (cf. previous commit) with more data and hence getting much better language confidence on Thai texts.
2022-12-14script, src: regenerate the Thai model.Jehan3-288/+325
With all the changes we made, regenerate the Thai model, which was of poor quality. This new one is much better.
2022-12-14src, script: fix the order of characters for Vietnamese.Jehan2-376/+356
Cf. commit 872294d.
2022-12-14src, script: add concept of alphabet_mapping in language models.Jehan4-237/+192
This allows handling cases where some characters are actually alternatives/variants of another: the same word can be written with either variant, and both are considered correct and equivalent. Browsing the Slovenian Wikipedia a bit, it looks like they only use them for titles there. I use this for the first time on characters with diacritics in Slovene. Indeed, these are so rarely used that they would hardly show up in the stats and, worse, any sequence using them in a tested text would likely show up as a negative sequence and hence drop the confidence in Slovenian. As a consequence, various Slovene texts would show up as Slovak, which is close enough and commonly uses the same characters with diacritics.
2022-12-14script: regenerate Slovak and Slovene with better alphabet support.Jehan6-558/+587
I was missing some characters, especially in the Slovak alphabet. Conversely, the Slovene alphabet does not use 4 letters of the common ASCII alphabet.
2022-12-14script: fix a stupid bug making same ratio for all frequent characters.Jehan1-1/+1
Argh! How did I miss this!
2022-12-14script, src: regenerate the Vietnamese model.Jehan3-229/+383
The alphabet was not complete and thus confidence was a bit too low. For instance, the VISCII test case's confidence bumped from 0.643401 to 0.696346 and the UTF-8 test case bumped from 0.863777 to 0.99. Only the Windows-1258 test case is slightly worse, from 0.532846 to 0.532098. But the overall recognition gain is obvious anyway.
2022-12-14src: fix negative confidence wrapping around because of unsigned int.Jehan1-1/+1
In the extreme case of more mCtrlChar than mTotalChar (since the latter does not include control characters), we end up with a negative value, which as an unsigned int becomes a huge integer. So because the confidence was so bad that it would have been negative, we ended up with a huge confidence instead. We had this case with our Japanese UTF-8 test file, which ended up identified as French ISO-8859-1. So I just cast the uint to float early on in order to avoid this pitfall. Now all our test cases succeed again, this time with full UTF-8+language support! Wouhou!
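A standalone illustration of this class of wraparound bug (the numbers and variable names are made up for the example, this is not the actual uchardet code):

    #include <cstdio>

    int main()
    {
      unsigned int total_chars = 10;  // counter that excludes control characters
      unsigned int ctrl_chars  = 12;  // pathological case: more control chars

      // Bug: the subtraction happens in unsigned arithmetic, wraps around to a
      // huge value, and only then is converted to float.
      float wrong = (total_chars - ctrl_chars) / (float) total_chars;

      // Fix: cast to float before subtracting, so the result can simply go
      // negative (and be clamped later) instead of wrapping around.
      float fixed = ((float) total_chars - ctrl_chars) / total_chars;

      std::printf("buggy: %g  fixed: %g\n", wrong, fixed);
      return 0;
    }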
2022-12-14script, src: remove generated statistics data for Korean.Jehan5-1315/+2
2022-12-14src: new nsCJKDetector specifically for Chinese/Japanese/Korean recognition.Jehan4-1/+313
I was pondering improving the logic of the LanguageModel contents in order to better handle languages with a huge number of characters (far too many to keep a full frequent list while keeping reasonable memory consumption and speed). But then I realized that this happens for languages which have their own set of characters anyway. For instance, modern Korean is nearly all hangul. Of course, we can find some Chinese characters here and there, but nothing which should really break confidence if we base it on the hangul ratio. Of course, if some day we want to go further and detect older Korean, we will have to improve the logic a bit with some statistics, though I wonder whether limiting ourselves to character frequency is not enough here (sequence frequency is maybe overboard). To be tested. In any case, this new class gives much more relevant confidence on Korean texts, compared to the statistics data we previously generated. For Japanese, it is a mix of kana and Chinese characters. A modern full text cannot exist without a lot of kana (probably only old texts, or very short texts such as titles, could have only Chinese characters). We would still want to add a bit of statistics to differentiate correctly between a Japanese text with a lot of Chinese characters in it and a Chinese text which quotes a few Japanese phrases. It will have to be improved, but for now it works fairly OK. A last case where we would want to play with statistics might be differentiating between regional variants, for instance Simplified Chinese, Taiwan or Hong Kong Chinese… More to experiment with later on. It's already a good first step for UTF-8 support with language!
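A very rough sketch of the hangul-ratio idea for the Korean case described above (hypothetical structure, not the actual nsCJKDetector code): the confidence is essentially driven by the proportion of Hangul syllables among the non-ASCII code points seen so far.

    #include <cstdint>

    struct CJKCounters {
      unsigned hangul = 0;
      unsigned other  = 0;

      void feed(uint32_t cp)
      {
        if (cp >= 0xAC00 && cp <= 0xD7A3)   // Hangul Syllables block
          ++hangul;
        else if (cp > 0x7F)                 // ignore plain ASCII characters
          ++other;
      }

      float korean_confidence() const
      {
        unsigned total = hangul + other;
        return total ? (float) hangul / total : 0.0f;
      }
    };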
2022-12-14README: fix a duplicate.Jehan1-1/+1
2022-12-14Update README.Jehan1-20/+105
2022-12-14src: consider any combination with a non-frequent character as sequence.Jehan1-0/+10
Basically, since we exclude non-letters (control chars, punctuation, spaces, separators, emoticons and whatnot), we consider any remaining character as an off-script letter (we may have forgotten some cases, but so far it looks promising). Hence it is normal to consider a combination involving these (i.e. 2 off-script letters, or 1 frequent letter + 1 off-script one, in any order) as a sequence too. Doing so drops the confidence even more for any text having too many of these. As a consequence, it widens again the gap between the first and second contenders, which seems to really show it works.
2022-12-14src: add Hindi/UTF-8 support.Jehan8-2/+501
2022-12-14src: improve confidence computation.Jehan2-26/+108
Detect various blocks of characters for punctuation, symbols, emoticons and whatnot. These are considered kind of neutral for the confidence (because it's normal to have punctuation, and various texts nowadays are expected to contain emoticons or various symbols). What is of interest is all the rest, which we then consider as out-of-range characters (likely characters from other scripts) and which will therefore drop the confidence. Confidence now takes into account the ratio of all in-range characters (script letters + various neutral characters) and the ratio of frequent letters among all letters (script letters + out-of-range characters). This improved algorithm makes for much more efficient detection, as it bumped most confidence values in our unit tests and usually increased the gap between the first and second contenders.
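A sketch of the "neutral block" classification idea described above (the ranges here are illustrative, not an exhaustive copy of the real block list): punctuation, symbols and emoji neither help nor hurt the confidence, while any other unexpected letter counts as out-of-range and lowers it.

    #include <cstdint>

    static bool is_neutral_codepoint(uint32_t cp)
    {
      return (cp >= 0x2000  && cp <= 0x206F)    // General Punctuation
          || (cp >= 0x20A0  && cp <= 0x20CF)    // Currency Symbols
          || (cp >= 0x2190  && cp <= 0x2BFF)    // arrows, math, misc. symbols
          || (cp >= 0x1F300 && cp <= 0x1FAFF);  // emoji and pictograph blocks
    }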
2022-12-14script: fix a bit BuildLangModel.py when use_ascii is True.Jehan1-3/+8
In particular, I prepare the case for English detection. I am not pushing actual English models yet, because it's not efficient enough yet. I will once I am able to handle English confidence better.
2022-12-14script, src: add generic Korean model.Jehan8-41/+2223
Until now, Korean charsets had their own probers, as there is no single-byte encoding for writing Korean. I now added a Korean model only for the generic character and sequence statistics. I also improved the generation script (script/BuildLangModel.py) to allow for languages without single-byte charset generation and to provide meaningful statistics even when the language script has a lot of characters (so we can't have a full sequence combination array; that would just be too much data). It's not perfect yet. For instance, our UTF-8 Korean test file ends up with a confidence of 0.38503, which is low for obvious Korean text. Still, it works (correctly detected, with top confidence compared to others) and is a first step toward further improvements of the detection confidence.
2022-12-14src, test: fix the new Johab prober and add a test.Jehan4-8/+15
This prober comes from MR !1 on the main branch, though it was too aggressive then and could not get merged. On the improved API branch, it doesn't detect other tests as Johab anymore. Also fixing it to work with the new API. Finally, adding a Johab/ko unit test.
2022-12-14src: build new charset prober for Johab Korean.Jehan6-6/+8
The CMake build was not complete and the enum nsSMState disappeared in commit 53f7ad0. Also fixing a few coding style issues. See the discussion in MR !1.
2022-12-14add charset prober for Johab KoreanLSY9-2/+1029
2022-12-14script, src: generate the Hebrew models.Jehan10-172/+642
The Hebrew model had never been regenerated by my scripts. I now added the base generation files. Note that I added 2 charsets, ISO-8859-8 and WINDOWS-1255, but they are nearly identical. One of the differences is that the generic currency sign is replaced by the sheqel sign (Israel's currency) in Windows-1255. And though this one lost the "double low line", apparently some Yiddish characters were added. Basically, it looks like most Hebrew text would work fine with the same confidence on both charsets, and detecting both is likely irrelevant. So I keep the charset file for ISO-8859-8, but won't actually use it. The good part is that Hebrew is now also recognized in UTF-8 text, thanks to the new code and the newly generated language model.
2022-12-14test: 4 new tests for UTF-8.Jehan4-0/+8
Taken from random pages for each of these languages. I now have a test for each of the 26 supported (UTF-8, language) couples. These all work fine and are detected with the right encoding and language.
2022-12-14src: drop the SURE_YES confidence for character distribution probers.Jehan1-1/+1
Some probers are based on character distribution analysis. Though it is still relevant detection logic, we also know that it is a lot less subtle than sequence distribution. Therefore let's give a good confidence to a text passing such analysis, yet not a near perfect one, thus leaving some chance to other probers. In particular, we can definitely consider that if some text gets over 0.7 on sequence distribution analysis, it is a very likely candidate. I had the case with the Finnish UTF-8 test, which was passing (UTF-8, Finnish) detection with a staggering 0.86 confidence, yet was overridden by UHC (EUC-KR). This used to not be a problem when nsMBCSGroupProber would check the UTF-8 prober first and stop there with just basic encoding detection. Now that we go further and return all relevant candidates, a simpler detection algorithm which always returns a too-good confidence is not the best idea.
2022-12-14src: do not shortcut UTF-8 detection too early.Jehan1-1/+3
I had the case with the Czech test, which was detected as Irish after being shortcut far too early, after only 16 characters. The confidence value was just barely above 0.5 for Irish (and barely below for Czech). By adding a threshold (at least 256 characters), we give the engine a bit of relevant data to actually make an informed decision. By then, the Czech confidence was at more than 0.7, whereas the Irish one was at 0.6.
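A minimal sketch of the early-exit guard described above (names and the confidence threshold are hypothetical; only the 256-character minimum comes from the commit message): even when the confidence already looks high, don't shortcut the detection before enough characters have been processed.

    #include <cstddef>

    static bool can_shortcut(float confidence, std::size_t chars_seen)
    {
      const float       SHORTCUT_THRESHOLD = 0.95f;  // illustrative value
      const std::size_t MIN_CHARS          = 256;    // from the commit message
      return chars_seen >= MIN_CHARS && confidence > SHORTCUT_THRESHOLD;
    }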
2022-12-14src: nsEscCharsetProber also returns the correct language.Jehan6-6/+21
nsEscCharsetProber will still only return a single candidate, because detection is done by a state machine, not by language statistics. Now it will also return the language attached to the encoding.
2022-12-14src: make nsMBCSGroupProber report all valid candidates.Jehan4-99/+202
Returning only the best one has limits, as it doesn't allow checking candidates with very close confidence. In particular, the UTF-8 prober now returns a ("UTF-8", lang) candidate for every language with a probable statistical fit.
2022-12-14src: allow for nsCharSetProber to return several candidates.Jehan27-96/+110
No functional change yet, because all probers still return 1 candidate. Yet we now add a GetCandidates() method returning the number of candidates. GetCharSetName(), GetLanguage() and GetConfidence() now take a parameter which is the candidate index (which must be below the return value of GetCandidates()). We can now consider that an nsCharSetProber computes a couple (charset, language) and that the confidence is for this specific couple, not just the confidence of charset detection.
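A simplified sketch of how the extended prober interface could read after this change; the method names come from the commit message, but the exact signatures (types, const-ness) are assumptions, not a copy of the real header.

    class nsCharSetProber {
    public:
      virtual ~nsCharSetProber() {}
      // Number of (charset, language) candidates this prober currently reports.
      virtual int GetCandidates() const = 0;
      // These accessors now take the candidate index, which must be smaller
      // than the value returned by GetCandidates().
      virtual const char *GetCharSetName(int candidate) const = 0;
      virtual const char *GetLanguage(int candidate) const = 0;
      virtual float GetConfidence(int candidate) const = 0;
    };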
2022-12-14src: nsMBCSGroupProber confidence weighed by language confidence.Jehan1-2/+16
Since our whole charset detection logic is based on text having meaning (using actual language statistics), just because a text is valid UTF-8 does not mean it is absolutely the right encoding. It may also fit other encodings, maybe with very high statistical confidence (and therefore as a better candidate). Therefore, instead of just returning 0.99 or other high values, let's weigh our encoding confidence with the best language confidence.
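A sketch of the weighting described above (hypothetical helper, not the actual prober code): instead of returning a flat 0.99 for any valid UTF-8 stream, scale it by the best language confidence found by the generic language detector.

    static float utf8_confidence(bool is_valid_utf8, float best_language_confidence)
    {
      if (!is_valid_utf8)
        return 0.0f;
      return 0.99f * best_language_confidence;
    }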
2022-12-14src: tweak again the language detection confidence.Jehan1-13/+9
Computing a logical number of sequences was a big mistake. In particular, a language with only positive sequences would have the same score (i.e. 1.0) as a language with a mix of only positive and probable sequences. Instead, just use the real number of sequences, but probable sequences don't add +1 to the numerator. Also drop mTypicalPositiveRatio, at least for now. In my tests, it mostly made results worse. Maybe this would still make sense for languages with a huge number of characters (like CJK languages), for which we won't have the full list of characters in our "frequent" list. Yet for most other languages, we actually list all the possible sequences within the character set, therefore any sequence outside our sequence list should necessarily drop the confidence. Tweaking the result back up with some ratio is therefore counter-productive. As for CJK cases, we'll see how to handle the much higher number of sequences (too many to list them all) when we get there.
2022-12-14test: update unit test to check detected languages.Jehan1-23/+43
Except for ASCII, UTF-16 and UTF-32, for which we don't detect languages yet.
2022-12-14src: reset language detectors when resetting a nsMBCSGroupProber.Jehan1-0/+6
2022-12-14src, script: regenerate all existing language models.Jehan43-4708/+5426
Now making sure that we have a generic language model working with UTF-8 for all 26 supported languages which had single-byte encoding support until now.
2022-12-14Using the generic language detector in UTF-8 detection.Jehan29-42/+234
Now the UTF-8 prober not only detects valid UTF-8, but also detects the most probable language. Using the data generated 2 commits back, this works very well. This is still basic and will require even more improvements. In particular, the nsUTF8Prober should now return an array of ("UTF-8", language) candidate couples, and nsMBCSGroupProber should itself forward these candidates as well as candidates from the other multi-byte detectors. This way, the public-facing API would get more probable candidates, in case the algorithm is slightly wrong. Also, the UTF-8 confidence is currently stupidly high as soon as we consider it to be right. We should likely weigh it with language detection (in particular, if no language is detected, this should severely weigh down UTF-8 detection; not to 0, but high enough to be a fallback in case no other encoding+lang is valid, and low enough to give a chance to other good candidate couples).
2022-12-14New generic language detector class.Jehan3-0/+300
It detects languages similarly to the single-byte encoding detector algorithm, based on character frequency and sequence frequency, except that it does it generically from Unicode code points, not caring at all about the original encoding. The confidence algorithm for language is very similar to the confidence algorithm for encoding+language in nsSBCharSetProber, though I tweaked it a little to make it more trustworthy. And I plan to tweak it even a bit more later, as I progressively improve the detection logic with some of the ideas I had.
2022-12-14Rebuild a bunch of language models.Jehan14-1401/+1617
Adding generic language models (see coming commit), which use the same data as the specific single-byte encoding statistics models, except that they apply it to Unicode code points. For this to work, instead of the CharToOrderMap which was mapping directly from encoded bytes (always 256 values) to order, we now add an array of frequent characters, ordered by Unicode code point and mapping to the order of frequency (which can be used on the same sequence mapping array). This of course means that each prober where we want to use these generic models will have to implement its own byte-to-code-point decoder, as this is per-encoding logic anyway. This will come in a subsequent commit.
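A sketch of the lookup structure this describes (hypothetical layout, not the generated headers themselves): the generic model keeps its frequent characters sorted by code point, so that a binary search maps a Unicode code point to the frequency order used by the sequence table.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    struct FrequentChar {
      uint32_t codepoint;  // the array is sorted by this field
      uint16_t order;      // frequency rank, index into the sequence model
    };

    static int codepoint_to_order(const FrequentChar *table, std::size_t n,
                                  uint32_t cp)
    {
      const FrequentChar *end = table + n;
      const FrequentChar *it =
          std::lower_bound(table, end, cp,
                           [](const FrequentChar &f, uint32_t c) {
                             return f.codepoint < c;
                           });
      return (it != end && it->codepoint == cp) ? it->order
                                                : -1;  // not a frequent character
    }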
2022-12-14src: add a --weight option to the CLI tool.Jehan1-13/+72
Syntax is: lang1:weight1,lang2:weight2… For instance: `uchardet -wfr:1.1,it:1.05 file.txt` if you think a file is probably French or maybe Italian.
2022-12-14src: new weight concept in the C API.Jehan3-4/+86
Pretty basic: you can weight preferred languages and this will impact the result. Say the algorithm "hesitates" between encoding E1 in language L1 and encoding E2 in language L2. By setting L2 with a 1.1 weight, for instance because it is the OS language or the usual preferred language, you may help the algorithm to overcome very tight cases. It can also be helpful when you already know for sure the language of a document and just don't know its encoding. Then you may set a very high value for this language, or simply set a default value of 0 and set 1 for this language; only relevant encodings will be taken into account. This is still limited though, as generic encodings are still implemented language-agnostic. UTF-8, for instance, would be disadvantaged by this weight system until we make it language-aware.
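A sketch of the effect described above (not the actual API, just the concept): each candidate's confidence is multiplied by the weight configured for its language, so a slightly preferred language can win very tight cases, and a default weight of 0 drops languages that were not listed.

    #include <map>
    #include <string>

    float weighted_confidence(float confidence, const std::string &lang,
                              const std::map<std::string, float> &weights,
                              float default_weight = 1.0f)
    {
      auto it = weights.find(lang);
      float w = (it != weights.end()) ? it->second : default_weight;
      return confidence * w;  // with default_weight 0, unlisted languages drop out
    }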
2022-12-14src: fix the usage of `uchardet` tool.Jehan1-1/+1
It was displaying -v for both verbose and version options. The new --verbose short option is actually -V (uppercase).
2022-12-14src: `uchardet` tool now shows the language code in verbose mode.Jehan1-3/+9
2022-12-14script: update BuildLangModel.py to updated SequenceModel struct.Jehan1-1/+2
In particular, there is now a language code member.
2022-12-14src: new API to get the detected language.Jehan51-104/+276
This doesn't work for all probers yet, in particular not for the most generic probers (such as UTF-8) or WINDOWS-1252. These will return NULL. It's still a good first step. Right now, it returns the 2-character language code from ISO 639-1. A project using this could easily get the English language name from the XML/JSON files provided by the iso-codes project. That project also makes it easy to localize the language name into other languages through gettext (this is what we do in GIMP, for instance). I don't add any dependency though, and leave it to downstream projects to implement this. I was also wondering whether we want to support region information for cases where it would make sense. I especially wondered about it for Chinese encodings, as some of them seem quite specific to a region (according to Wikipedia at least). For the time being though, these just return "zh". We'll see later whether it makes sense to be more accurate (maybe depending on reports?).
2022-12-14test: fix test script to use the new API and get rid of build warning.Jehan1-1/+1
2022-12-14src: new option --verbose|-V in the `uchardet` CLI tool.Jehan1-10/+38
This new option will print the whole candidate list as well as their respective confidence values (ordered from higher to lower).
2022-12-14src: new API to get all candidates and their confidence.Jehan3-3/+51
Adding: - uchardet_get_candidates() - uchardet_get_encoding() - uchardet_get_confidence() Also deprecating uchardet_get_charset() to point developers at the new API instead. I was unsure whether this should really get deprecated, as it keeps the basic case simple, but the new API is just as easy anyway. You can also directly call uchardet_get_encoding() with candidate 0 (same as uchardet_get_charset(): it will then return "" when no candidate was found).
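A usage sketch of the new candidate API named above. The function names come from the commit message, but the exact signatures (return types, the index parameter) are assumptions here, not a verbatim copy of uchardet.h.

    #include <uchardet/uchardet.h>
    #include <cstddef>
    #include <cstdio>

    void print_candidates(const char *data, std::size_t len)
    {
      uchardet_t ud = uchardet_new();
      uchardet_handle_data(ud, data, len);
      uchardet_data_end(ud);

      // Assumed: uchardet_get_candidates() returns the number of candidates,
      // and the two getters take a candidate index starting at 0.
      for (int i = 0; i < uchardet_get_candidates(ud); i++)
        std::printf("%s (confidence %f)\n",
                    uchardet_get_encoding(ud, i),
                    uchardet_get_confidence(ud, i));

      uchardet_delete(ud);
    }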
2022-12-14src: now reporting encoding+confidence and keeping a list.Jehan3-26/+62
Preparing for an updated API which will also allow looking at the confidence value, as well as getting the list of possible candidates (i.e. all detected encodings with a confidence value high enough that we would even consider them). It is still only internal logic though.
2022-12-08README, doc: some README and release procedure updates.Jehan2-9/+13