
How to use UTF-8 in C code?

My setup: gcc-4.9.2, UTF-8 environment.

The following C program works with ASCII, but does not with UTF-8.

Create input file:

echo -n 'привет мир' > /tmp/вход

This is test.c:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIZE 10

int main(void)
{
  char buf[SIZE+1];
  char *pat = "привет мир";
  char str[SIZE+2];

  FILE *f1;
  FILE *f2;

  f1 = fopen("/tmp/вход","r");
  f2 = fopen("/tmp/выход","w");

  if (fread(buf, 1, SIZE, f1) > 0) {
    buf[SIZE] = 0;

    if (strncmp(buf, pat, SIZE) == 0) {
      sprintf(str, "% 11s\n", buf);
      fwrite(str, 1, SIZE+2, f2);
    }
  }

  fclose(f1);
  fclose(f2);

  exit(0);
}

Check the result:

./test; grep -q ' привет мир' /tmp/выход && echo OK

What should be done to make UTF-8 code work as if it were ASCII code, without having to worry about how many bytes a symbol takes, etc.? In other words: what has to change in the example so that any UTF-8 symbol is treated as a single unit (that includes argv, STDIN, STDOUT, STDERR, file input, output and the program code)?

#define SIZE 10

The buffer size of 10 is insufficient to store the UTF-8 string привет мир. Try changing it to a larger value. On my system (Ubuntu 12.04, gcc 4.8.1), changing it to 20 worked perfectly.

UTF-8 is a multibyte encoding which uses between 1 and 4 bytes per character. So it is safer to use 40 as the buffer size above. There is a big discussion at "How many bytes does one Unicode character take?" which might be interesting.
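To see how big the string really is, here is a minimal check (not part of the original question, and assuming the source file itself is saved as UTF-8):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "привет мир";   /* 10 visible symbols */

    /* Each Cyrillic letter takes 2 bytes in UTF-8 and the space takes 1,
       so the string occupies 9*2 + 1 = 19 bytes plus the terminating 0. */
    printf("strlen() reports %zu bytes\n", strlen(s));

    return 0;
}

On a UTF-8 system this prints 19, so SIZE 10 cuts the string in the middle of a character, while 40 leaves plenty of room.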

Siddhartha Ghosh's answer identifies the basic problem. Fixing your code requires more work, though.

I used the following script (chk-utf8-test.sh):

echo -n 'привет мир' > вход
make utf8-test
./utf8-test
grep -q 'привет мир' выход && echo OK

I called your program utf8-test.c and amended the source like this, removing the references to /tmp and being more careful with lengths:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIZE 40

int main(void)
{
    char buf[SIZE + 1];
    char *pat = "привет мир";
    char str[SIZE + 2];

    FILE *f1 = fopen("вход", "r");
    FILE *f2 = fopen("выход", "w");

    if (f1 == 0 || f2 == 0)
    {
        fprintf(stderr, "Failed to open one or both files\n");
        return(1);
    }

    size_t nbytes;
    if ((nbytes = fread(buf, 1, SIZE, f1)) > 0)
    {
        buf[nbytes] = 0;

        if (strncmp(buf, pat, nbytes) == 0)
        {
            sprintf(str, "%.*s\n", (int)nbytes, buf);
            fwrite(str, 1, nbytes, f2);
        }
    }

    fclose(f1);
    fclose(f2);

    return(0);
}

And when I ran the script, I got:

$ bash -x chk-utf8-test.sh
+ '[' -f /etc/bashrc ']'
+ . /etc/bashrc
++ '[' -z '' ']'
++ return
+ alias 'r=fc -e -'
+ echo -n 'привет мир'
+ make utf8-test
gcc -O3 -g -std=c11 -Wall -Wextra -Werror utf8-test.c -o utf8-test
+ ./utf8-test
+ grep -q 'привет мир' $'в\321\213\321\205од'
+ echo OK
OK
$

For the record, I was using GCC 5.1.0 on Mac OS X 10.10.3.

This is more of a corollary to the other answers, but I'll try to explain this from a slightly different angle.

Here is Jonathan Leffler's version of your code, with three slight changes: (1) I made the actual individual bytes in the UTF-8 strings explicit; (2) I modified the sprintf format string's width specifier to hopefully do what you are actually attempting to do; and (3), tangentially, I used perror to get a slightly more useful error message when something fails.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIZE 40

int main(void)
{
  char buf[SIZE + 1];
  char *pat = "\320\277\321\200\320\270\320\262\320\265\321\202"
    " \320\274\320\270\321\200";  /* "привет мир" */
  char str[SIZE + 2];

  FILE *f1 = fopen("\320\262\321\205\320\276\320\264", "r");  /* "вход" */
  FILE *f2 = fopen("\320\262\321\213\321\205\320\276\320\264", "w");  /* "выход" */

  if (f1 == 0 || f2 == 0)
    {
      perror("Failed to open one or both files");  /* use perror() */
      return(1);
    }

  size_t nbytes;
  if ((nbytes = fread(buf, 1, SIZE, f1)) > 0)
    {
      buf[nbytes] = 0;

      if (strncmp(buf, pat, nbytes) == 0)
        {
          sprintf(str, "%*s\n", 1+(int)nbytes, buf);  /* nbytes+1 length specifier */
          fwrite(str, 1, 1+nbytes, f2); /* +1 here too */
        }
    }

  fclose(f1);
  fclose(f2);

  return(0);
}

The behavior of sprintf with a positive numeric width specifier is to pad with spaces from the left, so the space you tried to use is superfluous. But you have to make sure the target field is wider than the string you are printing in order for any padding to actually take place.
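To illustrate (a small sketch of the padding behaviour described above, assuming a UTF-8 source file and terminal): the width is counted in bytes, not in visible symbols, so a width of 11 is already smaller than the 19-byte string and produces no padding at all, whereas a width of strlen(s) + 1 yields exactly one leading space, which is what the %*s with 1+(int)nbytes in the code above arranges.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "привет мир";               /* 10 symbols, 19 bytes  */

    printf("[%11s]\n", s);                      /* 11 < 19: no padding   */
    printf("[%*s]\n", (int)strlen(s) + 1, s);   /* 20: one leading space */

    return 0;
}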

Just to make this answer self-contained, I will repeat what others have already said. A traditional char is always exactly one byte, but one character in UTF-8 is usually not exactly one byte, except when all your characters are actually ASCII. One of the attractions of UTF-8 is that legacy C code doesn't need to know anything about UTF-8 in order to continue to work, but of course, the assumption that one char is one glyph cannot hold. (As you can see, for example, the glyph п in "привет мир" maps to the two bytes, and hence two chars, "\320\277".)
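You can verify those byte values yourself with a tiny dump loop (again assuming the source file is saved as UTF-8):

#include <stdio.h>

int main(void)
{
    const char *s = "п";   /* one glyph, two char values */

    /* Print every byte of the literal in octal; this prints \320 \277,
       i.e. the single glyph occupies two bytes. */
    for (const unsigned char *p = (const unsigned char *)s; *p != 0; p++)
        printf("\\%03o ", *p);
    putchar('\n');

    return 0;
}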

Working at the level of raw bytes like this is clearly less than ideal, but it demonstrates that you can treat UTF-8 as "just bytes" if your code doesn't particularly care about glyph semantics. If yours does, you are better off switching to wchar_t, as outlined e.g. here: http://www.gnu.org/software/libc/manual/html_node/Extended-Char-Intro.html

However, the standard wchar_t is less than ideal when the standard expectation is UTF-8. See, e.g., the GNU libunistring documentation for a less intrusive alternative, and a bit of background. With that, you should be able to replace char with uint8_t and the various str* functions with u8_str* replacements and be done. The assumption that one glyph equals one byte will still need to be addressed, but that becomes a minor technicality in your example program. An adaptation is available at http://ideone.com/p0VfXq (though unfortunately the library is not available on http://ideone.com/ so it cannot be demonstrated there).
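If all you need is to treat each symbol as a single unit for counting or stepping purposes, and you would rather not pull in libunistring or switch to wchar_t, a common trick is to skip UTF-8 continuation bytes (bytes of the form 10xxxxxx). A minimal sketch of that idea in plain C (not libunistring code, and assuming the input is already valid UTF-8):

#include <stdio.h>
#include <stddef.h>

/* Count UTF-8 code points by counting the bytes that are NOT continuation
   bytes; continuation bytes have the bit pattern 10xxxxxx. No validation
   is performed, so the input must be valid UTF-8. */
static size_t utf8_count(const char *s)
{
    size_t n = 0;
    for (; *s != 0; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            n++;
    return n;
}

int main(void)
{
    printf("%zu\n", utf8_count("привет мир"));   /* prints 10 */
    return 0;
}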

The following code works as required:

#include <stdio.h>
#include <locale.h>
#include <stdlib.h>
#include <wchar.h>

#define SIZE 10

int main(void)
{
  setlocale(LC_ALL, "");
  wchar_t buf[SIZE+1];
  wchar_t *pat = L"привет мир";
  wchar_t str[SIZE+2];

  FILE *f1;
  FILE *f2;

  f1 = fopen("/tmp/вход","r");
  f2 = fopen("/tmp/выход","w");

  fgetws(buf, SIZE+1, f1);

  if (wcsncmp(buf, pat, SIZE) == 0) {
    swprintf(str, SIZE+2, L"% 11ls", buf);
    fputws(str, f2);
  }

  fclose(f1);
  fclose(f2);

  exit(0);
}
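With the wide-character approach, lengths are measured in symbols rather than bytes, which is why SIZE 10 is sufficient again. A small sketch showing the difference (assuming a UTF-8 locale and a source file saved as UTF-8):

#include <stdio.h>
#include <string.h>
#include <wchar.h>
#include <locale.h>

int main(void)
{
    setlocale(LC_ALL, "");                           /* pick up the UTF-8 locale */

    const char    *narrow = "привет мир";
    const wchar_t *wide   = L"привет мир";

    printf("strlen: %zu bytes\n", strlen(narrow));   /* 19 */
    printf("wcslen: %zu symbols\n", wcslen(wide));   /* 10 */

    return 0;
}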

Probably your test.c file is not stored in UTF-8 format, and for that reason the "привет мир" string in it is not UTF-8 encoded, so the comparison fails. Change the text encoding of the source file and try again.
