
While using IE, autocorrect transformed "naive" into "naïve"! My regional settings are AU English, and from a Unicode search point of view the two are nothing alike. I am not even sure whether other characters are being replaced as well, but from a programmer's point of view, what are the consequences and remedies in such cases? In SQL, is there a way to return results containing both "naïve" and "naive" (without hardcoding or explicit programming, of course)?

Are there other instances of automatic transliteration that one should be aware of? Or is there a method to handle such sameness of characters in other languages, e.g. umlauts in German, or the lack/inclusion of vowels in words written in Arabic script?

Thanks for any observations on this matter.

jimjim
    The question is very broad and vague. It is not clear how “IE autocorrect” (whatever it means) relates to the issue, which seems to be generally about alternate spellings of words – an “autocorrect” may replace one spelling by another, but mostly the variation is in original data. – Jukka K. Korpela Nov 25 '13 at 13:33
  • @JukkaK.Korpela IE == Internet Explorer. This is a web development question. – Izkata Nov 25 '13 at 17:04
  • We keep running into an issue where dashes get auto-converted to em- or en-dashes. When users try to cut and paste something like `grep -f foo` they get exciting errors ("file not found -f") – Alex Feinman Dec 02 '13 at 14:37
  • @JukkaK.Korpela: The autocorrect is not really relevant, I guess, but I have to assume the result of the autocorrect was being stored in a DB; at the time I was commenting on or answering something on a web site. – jimjim Oct 17 '16 at 23:03

3 Answers


In general, if people misspell words when entering data, querying for that data can be a challenge. In your specific example, you can find the data by explicitly using an accent-insensitive (AI) collation.

-- Accent-sensitive (AS) collation: 'i' and 'ï' are treated as different characters
if 'naive' = 'naïve' collate Latin1_General_CI_AS
    print 'TRUE'
else
    print 'FALSE'

Prints out FALSE

-- Accent-insensitive (AI) collation: 'i' and 'ï' compare as equal
if 'naive' = 'naïve' collate Latin1_General_CI_AI
    print 'TRUE'
else
    print 'FALSE'

Prints out TRUE
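
To return matching rows rather than just compare two literals, the same COLLATE clause can be applied in a WHERE clause. A minimal sketch, assuming a hypothetical Comments table with an nvarchar column Body (the names are illustrative, not from the question):

-- Returns rows containing either 'naive' or 'naïve' in a single query
select Id, Body
from Comments
where Body collate Latin1_General_CI_AI like N'%naive%'

Note that applying COLLATE to the column expression like this can prevent an index on Body from being used; defining the column with an accent-insensitive collation in the first place avoids that.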

p.s. The SQL syntax is MS SQL Server syntax. I don't know if or how this applies to other relational databases.

Pete

As Pete's answer points out, you can address this using "accent-insensitive" collation rules.

But I think that's only a band-aid.

You could say that 'naive' and 'naïve' are really just alternative (arguably correct) spellings of the same word. In the same vein, what about 'colour' versus 'color'? And what about accidental ... or deliberate ... misspellings and typing errors?

The big picture here is that if you are going to pick apart text produced (at some point) by humans, your code has to deal with all sorts of variability.

If there is a general technique for solving this, it is probably to normalize your text to standard spellings and orthography (broadly defined) before you store it in the database. And consider using a free-text search engine rather than a regular database.
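
A minimal sketch of that idea in SQL Server terms, assuming a hypothetical Articles table: the application stores the text exactly as typed plus a normalized copy (standard spellings, typography undone), and all searching goes against the normalized column. The table and column names here are illustrative only.

create table Articles (
    Id       int identity primary key,
    Body     nvarchar(max) not null,                              -- text exactly as the user typed it
    BodyNorm nvarchar(max) collate Latin1_General_CI_AI not null  -- normalized copy used only for searching
)

-- Queries only ever touch the normalized column
select Id, Body
from Articles
where BodyNorm like N'%naive%'

For anything beyond simple substring matching, SQL Server's full-text search (CONTAINS / FREETEXT) or an external search engine is a better fit, which is the free-text route suggested above.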

Stephen C

Typography is another frequent source of character substitution. Certain two-letter combinations may be replaced by ligatures during printing (e.g. ae might become æ), fractions expressed in text might be replaced with single characters, and symbols that are often mimicked in text may be replaced with dedicated characters (e.g. (c) becomes ©).

Such transformations are often performed when documents are being prepared for printing, but printing doesn't necessarily mean toner on paper. PDF files are typically produced by a printing-like process, so if PDF files are an input to your process, you may need to consider alternative forms for your data. For example, if you're searching for data in a PDF file, you need to be aware of the ways that text can change.

One way to manage this sort of thing is to convert all data to a canonical form before using it. Choose a form that makes sense for your project. You might decide to exclude ligatures but keep accents and symbols where possible. Then create a set of filters that implement your rules and use them to canonicalize all data coming into the system. This lets you do a single search for words like naïve instead of having to search for both naïve and naive.
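
A minimal sketch of such a filter in T-SQL, using a hypothetical dbo.Canonicalize function; the particular replacements are only examples of rules a project might adopt (here: undo ligatures and typographic dashes, keep accents):

create function dbo.Canonicalize (@text nvarchar(max))
returns nvarchar(max)
as
begin
    -- Undo typographic substitutions; accents are deliberately left alone
    set @text = replace(@text, nchar(64257), N'fi')   -- U+FB01 'fi' ligature
    set @text = replace(@text, nchar(64258), N'fl')   -- U+FB02 'fl' ligature
    set @text = replace(@text, nchar(8211),  N'-')    -- U+2013 en dash
    set @text = replace(@text, nchar(8212),  N'-')    -- U+2014 em dash
    set @text = replace(@text, nchar(189),   N'1/2')  -- U+00BD vulgar fraction one half
    return @text
end

The application (or an insert trigger) would run every incoming value through the filter, e.g. dbo.Canonicalize(@incoming), before the value is stored.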

The other way to manage it is to use unmodified data and try to deal with all the possible variations that might occur. If you don't have control over the data (e.g. you're searching through data on someone else's server) this may be the only reasonable option. Regular expressions can help here -- you can search for na[iï]ve.
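
In SQL Server specifically, the LIKE operator's bracket wildcard gives a similar single-character-class effect without a full regular-expression engine; a sketch against a hypothetical Comments table with an nvarchar column Body:

-- [iï] matches a single character that is either 'i' or 'ï',
-- so rows containing 'naive' or 'naïve' both match
select Id, Body
from Comments
where Body like N'%na[iï]ve%'

(If the column already uses an accent-insensitive collation, the plain pattern N'%naive%' would match both spellings anyway.)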

Caleb