The char type is used to describe individual characters. Most commonly, these will be character constants. For example, 'A' is a character constant with value 65. It is different from "A", a string containing a single character. Unicode code units can be expressed as hexadecimal values that run from \u0000 to \uFFFF. For example, \u03c0 is the Greek letter pi.
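For instance, a char holds a single code unit and converts freely to its integer value. A minimal sketch (the class name CharDemo is ours, not from the text):

=> File: CharDemo.java
public class CharDemo {
    public static void main(String[] args) {
        char grade = 'A';                 // character constant with value 65
        System.out.println((int) grade);  // prints 65
        String letter = "A";              // a string containing one character
        char pi = '\u03c0';               // the Greek letter pi
        System.out.println(letter + pi);  // prints Api (A followed by pi)
    }
}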
Besides the \u escape sequences that indicate the encoding of Unicode code units, there are several escape sequences for special characters.
| Escape sequence | Name | Unicode value |
|---|---|---|
| \b | Backspace | \u0008 |
| \t | Tab | \u0009 |
| \n | Linefeed | \u000a |
| \r | Carriage return | \u000d |
| \" | Double quote | \u0022 |
| \' | Single quote | \u0027 |
| \\ | Backslash | \u005c |
You can use these escape sequences inside quoted character constants and strings, such as '\u2122' or "Hello\n". The \u escape sequence (but none of the other escape sequences) can even be used outside quoted character constants and strings. For example:
public static void main(String\u005B\u005D args)
is perfectly legal; \u005B and \u005D are the encodings for [ and ].
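As a sketch of both kinds of escapes in context (the class name EscapeDemo is ours):

=> File: EscapeDemo.java
public class EscapeDemo {
    public static void main(String\u005B\u005D args) {
        // The parameter declaration above uses Unicode escapes for [ and ].
        System.out.println("Tab:\there");        // tab escape
        System.out.println("Line 1\nLine 2");    // linefeed escape
        System.out.println("She said \"hi\"");   // double quote escape
        System.out.println('\u2122');            // trademark sign
    }
}

Note that Unicode escapes are processed before the program is parsed, even inside comments and string literals, so a malformed backslash-u sequence anywhere in a source file is a compile-time error.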
To fully understand the char type, you have to know about the Unicode encoding scheme. Unicode was invented to overcome the limitations of traditional character encoding schemes. Before Unicode, there were many different standards: ASCII in the United States, ISO 8859-1 for Western European languages, KOI-8 for Russian, GB 18030 and BIG-5 for Chinese, and so on. This caused two problems. First, a particular code value corresponds to different letters in the different encoding schemes. Second, the encodings for languages with large character sets have variable length: some common characters are encoded as single bytes, others require two or more bytes.
Unicode was designed to solve these problems. When the unification effort started in the 1980s, a fixed 2-byte code was more than sufficient to encode all characters used in all languages in the world, with room to spare for future expansion, or so everyone thought at the time. In 1991, Unicode 1.0 was released, using slightly less than half of the available 65,536 code values. Java was designed from the ground up to use 16-bit Unicode characters, which was a major advance over other programming languages that used 8-bit characters.
Unfortunately, over time, the inevitable happened. Unicode grew beyond 65,536 characters, primarily due to the addition of a very large set of ideographs used for Chinese, Japanese, and Korean. Now, the 16-bit char type is insufficient to describe all Unicode characters.
We need a bit of terminology to explain how this problem is resolved in Java, beginning with Java SE 5.0. A code point is a code value that is associated with a character in an encoding scheme. In the Unicode standard, code points are written in hexadecimal and prefixed with U+, such as U+0041 for the code point of the Latin letter A. Unicode code points are grouped into 17 code planes. The first code plane, called the basic multilingual plane, consists of the "classic" Unicode characters with code points U+0000 to U+FFFF. The sixteen additional planes, with code points U+10000 to U+10FFFF, hold the supplementary characters.
The UTF-16 encoding represents all Unicode code points in a variable-length code. The characters in the basic multilingual plane are represented as single 16-bit values, called code units. The supplementary characters are encoded as consecutive pairs of code units. Each of the values in such an encoding pair falls into a range of 2,048 unused values of the basic multilingual plane, called the surrogates area (U+D800 to U+DBFF for the first code unit, U+DC00 to U+DFFF for the second code unit). This is rather clever, because you can immediately tell whether a code unit encodes a single character or whether it is the first or second part of a supplementary character. For example, the mathematical symbol for the set of integers, 𝕫, has code point U+1D56B and is encoded by the two code units U+D835 and U+DD6B.
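You can check this encoding with the standard Character class methods; a minimal sketch (the class name SurrogateDemo is ours):

=> File: SurrogateDemo.java
public class SurrogateDemo {
    public static void main(String[] args) {
        int codePoint = 0x1D56B;                      // a supplementary character
        char[] units = Character.toChars(codePoint);  // its UTF-16 code units
        System.out.printf("U+%04X U+%04X%n",
            (int) units[0], (int) units[1]);          // prints U+D835 U+DD6B
        System.out.println(Character.isHighSurrogate(units[0])); // true
        System.out.println(Character.isLowSurrogate(units[1]));  // true
    }
}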
In Java, the char type describes a code unit in the UTF-16 encoding.
Our strong recommendation is not to use the char type in your programs unless you are actually manipulating UTF-16 code units. You are almost always better off treating strings as abstract data types.
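For example, the String class lets you work with code points directly, so supplementary characters are never split in half; a minimal sketch (the class name CodePointDemo is ours):

=> File: CodePointDemo.java
public class CodePointDemo {
    public static void main(String[] args) {
        String s = "Z\uD835\uDD6B";  // 'Z' followed by the character U+1D56B
        System.out.println(s.length());                      // 3 code units
        System.out.println(s.codePointCount(0, s.length())); // 2 code points
        // Iterate over code points rather than char values (Java 8 and later):
        s.codePoints().forEach(cp -> System.out.printf("U+%04X%n", cp));
    }
}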
* Example:
=> File: CharType.java
package com.webmobileaz;

public class CharType {
    public static void main(String\u005B\u005D args) {
        char ch = 'c';
        // The escapes encode "Hello", the trademark sign U+2122, and the
        // surrogate pair for U+1D56B, followed by the char variable.
        System.out.println(\u0022Hello\u2122\uD835\uDD6B\u0022 + ch);
    }
}
=> Result:
Hello™𝕫c